Human-robot collaboration is one of the most promising applications of autonomous robots. To achieve it, robots need to interact with humans in a meaningful, flexible and adaptable manner, especially when facing new situations. Emerging technologies such as virtual reality and wearable devices now make it possible to capture the natural movements of multiple users. The next generation of learning methods should exploit the massive stream of information from these enhanced sensors to bootstrap the learning of new activities.
The goal of this PhD project is to develop semantic-based learning algorithms that can cope with the large amounts of data generated by multi-modal, multi-level sensors and react to dynamically changing environments in real time, yielding robots with enhanced autonomy and manipulation capabilities. The collected data will also capture the different styles of human demonstrations, which will help in developing a more human-centred control solution.