Karinne Ramirez-Amaro

Assistant professor, Electrical Engineering

Dr. Karinne Ramirez-Amaro has been an Assistant Professor in the Mechatronics research group since September 2019. Previously, she was a post-doctoral researcher at the Institute for Cognitive Systems (ICS) at the Technical University of Munich (TUM). She completed her Ph.D. (summa cum laude) at the Department of Electrical and Computer Engineering at TUM in 2015. From October 2009 until December 2012, she was a member of the Intelligent Autonomous Systems (IAS) group headed by Prof. Michael Beetz. She received a Master's degree in Computer Science (with honours) from the Center for Computing Research of the National Polytechnic Institute (CIC-IPN) in Mexico City, Mexico, in 2007.
In December 2015, Dr. Ramirez-Amaro received the Laura Bassi award, granted by TUM and the Bavarian government, to conduct a one-year research project. For her doctoral thesis, she was awarded the prize for excellent doctoral degrees by female engineering students, granted by the state of Bavaria, Germany, in September 2015. In addition, she was granted a DAAD – CONACYT scholarship for her Ph.D. research, and she received the Google Anita Borg scholarship in 2011. She was involved in the EU FP7 project Factory-in-a-day and in the DFG-SFB project EASE. Her research interests include Artificial Intelligence, Semantic Representations, Assistive Robotics, Expert Systems, and Human Activity Recognition and Understanding.
SSY235 - Decision-making for autonomous systems (LP2 - Fall 2020/2021)
Extracting semantic representations from different observations
My goal is to understand human behaviors and intentions, and to find models that generalize and explain them from observations of everyday activities. To achieve this, we present a framework that infers human activities from observations using semantic representations. The proposed framework can be used to address the challenging problem of transferring tasks and skills to humanoid robots. We propose a method that allows robots to obtain a higher-level understanding of a demonstrator's behavior via semantic representations. This abstraction from observations captures the "essence" of the activity, thereby indicating which aspects of the demonstrator's actions should be executed in order to accomplish the required activity. Thus, a meaningful semantic description is obtained in terms of human motions and object properties.
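The following minimal sketch (in Python) illustrates the general flavour of this kind of semantic reasoning: low-level observations such as hand motion and object properties are mapped by a few simple rules to a higher-level activity label. The feature names and rules below are illustrative assumptions, not the rules used in the published framework.

# Minimal sketch (an assumption for illustration, not the published implementation):
# map low-level observations (hand motion, object properties) to a semantic
# activity label with a few hand-written rules.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    hand_moving: bool               # is the demonstrator's hand in motion?
    object_in_hand: Optional[str]   # object currently grasped, if any
    object_acted_on: Optional[str]  # object the hand approaches or uses, if any

def infer_activity(obs: Observation) -> str:
    """Return a semantic activity label for one observed motion segment."""
    if not obs.hand_moving and obs.object_in_hand is None:
        return "idle"
    if obs.hand_moving and obs.object_in_hand is None and obs.object_acted_on:
        return "reach"          # moving toward an object with an empty hand
    if obs.object_in_hand and obs.object_acted_on:
        return "use_object"     # e.g. cutting bread while holding a knife
    if obs.object_in_hand:
        return "take_or_carry"  # holding an object while moving
    return "unknown"

print(infer_activity(Observation(True, None, "cup")))       # -> reach
print(infer_activity(Observation(True, "knife", "bread")))  # -> use_object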
 
For more information, look at these papers:
We develop a novel method that generates compact semantic models for inferring coordinated human activities, including tasks that require an understanding of dual-arm sequencing. These models are robust and invariant to observations of different execution styles of the same activity. Additionally, the obtained semantic representations are able to re-use the acquired knowledge to infer different types of activities. Furthermore, our method is capable of inferring dual-arm co-manipulation activities, and it considers the correct synchronization between the inferred activities to achieve the desired common goal. We have evaluated our semantic-based method on two different humanoid platforms, the iCub robot and the REEM-C robot.
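To make the synchronization aspect concrete, here is a small illustrative sketch (assumed activity states and rule, not the published method): each arm's activity is inferred separately, and a joint co-manipulation step may only start once both arms are in a compatible state.

# Illustrative sketch (assumed states and rule, not the published method):
# infer each arm's activity separately, then enforce a simple synchronization
# constraint so a joint co-manipulation step only starts when both arms are ready.

from enum import Enum, auto

class ArmActivity(Enum):
    IDLE = auto()
    REACH = auto()
    GRASP = auto()
    HOLD = auto()

def ready_for_joint_lift(left: ArmActivity, right: ArmActivity) -> bool:
    """Both arms must have a stable grasp before a shared 'lift' activity starts."""
    stable = (ArmActivity.GRASP, ArmActivity.HOLD)
    return left in stable and right in stable

# Example: per-arm activity inferences over three time steps.
timeline = [
    (ArmActivity.REACH, ArmActivity.REACH),
    (ArmActivity.GRASP, ArmActivity.REACH),
    (ArmActivity.GRASP, ArmActivity.GRASP),  # both arms ready -> joint lift may start
]
for t, (left, right) in enumerate(timeline):
    status = "start joint lift" if ready_for_joint_lift(left, right) else "wait"
    print(t, left.name, right.name, "->", status)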

For more information, look at these papers:
Spatio-temporal Feature Learning and Semantic Rules
We present a two-stage framework that deals with the problem of automatically extracting human activities from videos. First, for action recognition, we employ a state-of-the-art unsupervised learning algorithm based on Independent Subspace Analysis (ISA). This learning algorithm extracts spatio-temporal features directly from video data and is computationally more efficient and robust than other unsupervised methods. In the second stage, we define a new method to automatically generate semantic rules that can reason about human activities. This work was done in collaboration with Prof. Dr. Byoung-Tak Zhang's laboratory.
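As a schematic of the two-stage idea, the sketch below uses a plain PCA decomposition as a stand-in for ISA and a trivial rule as the second stage; the function names, the threshold, and the PCA substitution are assumptions made purely for illustration.

# Schematic two-stage sketch (assumptions for illustration only): stage 1 learns
# filters from spatio-temporal patches without labels (plain PCA stands in for
# ISA here), stage 2 applies a trivial rule to the resulting feature vector.

import numpy as np

def learn_filters(patches: np.ndarray, n_components: int = 8) -> np.ndarray:
    """Stage 1: learn an unsupervised basis from flattened spatio-temporal patches."""
    centered = patches - patches.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]          # learned filters, one per row

def encode(patch: np.ndarray, filters: np.ndarray) -> np.ndarray:
    """Project one patch onto the learned filters to obtain a feature vector."""
    return filters @ patch

def classify(features: np.ndarray) -> str:
    """Stage 2: a trivial rule on the feature energy (placeholder for semantic rules)."""
    return "dynamic_action" if np.linalg.norm(features) > 1.0 else "static_pose"

# Toy data: 200 flattened spatio-temporal patches of dimension 64.
rng = np.random.default_rng(0)
patches = rng.normal(size=(200, 64))
filters = learn_filters(patches)
print(classify(encode(patches[0], filters)))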

For more information, look at this paper: Enhancing Human Action Recognition through Spatio-temporal Feature Learning and Semantic Rules, K. Ramirez-Amaro, E. S. Kim, J. Kim, B.-T. Zhang, M. Beetz, G. Cheng, 13th IEEE-RAS International Conference on Humanoid Robots, 2013.

Published: Wed 17 Jun 2020.