20 Nov 2020, 13.00 (Swedish time)
Title: Learning from vision and touch for robotic grasping and manipulation
Robots are envisioned as capable machines that can easily navigate and interact in a world built for humans. However, looking around us, we see robots mainly confined to factories, performing repetitive tasks in environments designed to circumvent their limitations. The central question of my research is how we can create robots capable of adapting so that they can co-inhabit our world. This means designing systems that can function in unstructured, continuously changing environments with unlimited combinations of object shapes, sizes, appearances, and positions; that can understand, adapt to, and learn from humans; and, importantly, that can do so from small amounts of data.
Specifically, my work focuses on grasping and manipulation, fundamental capabilities for a robot interacting with humans in our environment, along with dexterity (e.g. using objects and tools successfully) and high-level reasoning (e.g. deciding which object or tool to use). Despite decades of research, robust autonomous grasping and manipulation approaching human skill remains an elusive goal. One main difficulty lies in dealing with the inevitable uncertainties in how a robot perceives the world. Our environment is dynamic and complex in structure, and sensory measurements are noisy and carry a large degree of uncertainty, which makes avoiding failures challenging.
In my research I have developed methodologies that enable a robot to interact with natural objects and to learn about object properties and the relations between tasks and sensory streams. I have also developed tools that allow a robot to use multiple streams of sensory data in a complementary fashion. In this talk I will specifically address how a robot can use vision and touch to answer grasp-related questions, e.g. estimating unknown object properties such as shape, and performing grasp planning, analysis, and adaptation.
Registration for this event will open in November
About the speaker:
Yasemin Bekiroglu is an Assistant Professor in the Automatic Control research group. She completed her Ph.D. at the Royal Institute of Technology (KTH) in 2012. Her research focuses on data-efficient learning from multisensory data for robotics applications. She received the Best Paper Award at the IEEE International Conference on Robotics and Automation for Humanitarian Applications (RAHA) in 2016 and the Best Manipulation Paper Award at the IEEE International Conference on Robotics and Automation (ICRA) in 2013, and was a finalist for the CoTeSys Cognitive Robotics Best Paper Award at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in 2013. She serves as a reviewer for robotics conferences and journals.
About CHAIR Spotlight on Research
CHAIR Spotlight on Research, organised by Chalmers AI Research Centre (CHAIR), is a series of short AI talks given by researchers from Chalmers. The seminars are targeted at experts from the CHAIR Consortium core partners as well as other Chalmers researchers.
Our aim is to increase awareness of AI research at Chalmers among Chalmers researchers and AI experts in industry. In the seminars, speakers present an overview of their current research and their thoughts on new research, ideas, and challenges – anything they believe to be of interest to other researchers. Each seminar takes place online and consists of a 30-minute presentation followed by a 15-minute discussion.
The first series of CHAIR Spotlight Research talks has the theme “New Chalmers researchers on the spot!”, meaning that researchers who joined Chalmers within the last three years give these talks.
The seminars are open to all and free of charge. CHAIR Spotlight Research talks take place on Fridays, 13:00-13:45.