When humans carry out a simple task, such as setting a table, we might approach the challenge in several different ways, depending on the conditions. If a chair unexpectedly stands in the way, we could choose to move it or walk around it. We alternate between our right and left hands, take pauses, and perform any number of unplanned actions.
But robots do not work in the same way. They need precise programming and instructions all the way to the goal. This approach makes them very efficient in environments where they constantly follow the same pattern, such as factory processing lines. But to successfully interact with people in areas such as healthcare or customer-facing roles, robots need to develop much more flexible ways of working.
“In the future we foresee robots accomplishing some basic household activities, such as setting and cleaning a table, placing kitchen utensils in the sink, or helping to organize groceries,” says Karinne Ramirez-Amaro, Assistant Professor at the Department of Electrical Engineering.
The Chalmers University researchers wanted to investigate whether it was possible to teach a robot a more humanlike approach to solving tasks – to develop an ‘explainable AI’ that extracts general instead of specific information during a demonstration, so that it can then plan a flexible and adaptable path towards a long-term goal. Explainable AI (XAI) is a term that refers to a type of artificial intelligence where humans can understand how it arrived at a specific decision or result.
Teaching a robot to stack objects under changing conditions
The researchers asked several people to perform the same task – stacking piles of small cubes – twelve times, in a VR environment. Each time the task was performed in a different way, and the movements the humans made were tracked through a set of laser sensors.
“When we humans have a task, we divide it into a chain of smaller sub-goals along the way, and every action we perform is aimed at fulfilling an intermediate goal. Instead of teaching the robot an exact imitation of human behavior, we focused on identifying what the goals were, looking at all the actions that the people in the study performed,” says Karinne Ramirez-Amaro.
The researchers' unique method meant the AI focused on extracting the intent of the sub-goals and built libraries consisting of different actions for each one. Then, the AI created a planning tool which could be used by a TIAGo robot – a mobile service robot designed to work in indoor environments. With the help of the tool, the robot was able to automatically generate a plan for a given task of stacking cubes on top of one another, even when the surrounding conditions were changed.
In short: The robot was given the task of stacking the cubes and then, depending on the circumstances, which changed slightly for each attempt, chose for itself a combination of several possible actions to form a sequence that would lead to completion of the task. The results were extremely successful.
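To make the idea concrete, the sketch below illustrates the general principle described above in Python. It is not the researchers' code: the names (Action, plan, the "holding(A)" and "on(A,B)" sub-goals) and the greedy selection strategy are illustrative assumptions, intended only to show how a library of alternative actions per sub-goal can yield different action sequences depending on the current situation.

```python
# Hypothetical sketch of the idea: each demonstration is reduced to a sequence of
# sub-goals, every sub-goal gets a library of candidate actions, and a simple planner
# picks whichever action is applicable in the current situation.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    applicable: Callable[[dict], bool]   # can this action run in the current state?
    effect: Callable[[dict], dict]       # how the state changes if the action succeeds

# Library built from demonstrations: sub-goal -> alternative actions that achieve it.
ActionLibrary = Dict[str, List[Action]]

def plan(subgoals: List[str], library: ActionLibrary, state: dict) -> List[str]:
    """For each sub-goal, choose the first action that is applicable in the current state."""
    chosen = []
    for goal in subgoals:
        for action in library.get(goal, []):
            if action.applicable(state):
                state = action.effect(state)
                chosen.append(action.name)
                break
        else:
            raise RuntimeError(f"no applicable action for sub-goal: {goal}")
    return chosen

# Toy example: stack cube A on cube B, reaching with whichever hand happens to be free.
library = {
    "holding(A)": [
        Action("pick_A_right", lambda s: s["right_hand_free"], lambda s: {**s, "holding": "A"}),
        Action("pick_A_left",  lambda s: s["left_hand_free"],  lambda s: {**s, "holding": "A"}),
    ],
    "on(A,B)": [
        Action("place_A_on_B", lambda s: s.get("holding") == "A", lambda s: {**s, "holding": None}),
    ],
}

state = {"right_hand_free": False, "left_hand_free": True}
print(plan(["holding(A)", "on(A,B)"], library, state))
# -> ['pick_A_left', 'place_A_on_B']
```

Because the plan is assembled from sub-goals rather than copied from any single demonstration, changing the starting conditions (for example, which hand is free) simply leads to a different but equally valid action sequence.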
"With our AI, the robot made plans with a 92% success rate after just a single human demonstration. When the information from all twelve demonstrations was used, the success rate reached up to 100%," says Maximilian Diehl.
The work was presented at IROS 2021, one of the world’s most prestigious conferences in robotics. In the next phase of the project, the researchers will investigate how robots can communicate with humans and explain what went wrong, and why, if they fail a task.
Industry and healthcare
The long-term goal is to use robots in industry to help technicians with tasks that could cause long-term health problems, for example tightening bolts and nuts on truck wheels. In healthcare, it could be tasks like bringing and collecting medicine or food.
“We want to make the job of healthcare professionals easier so that they can focus on tasks which need more attention,” says Karinne Ramirez-Amaro.
"It might still take several years until we see genuinely autonomous and multi-purpose robots, mainly because many individual challenges still need to be addressed, like computer vision, control, and safe interaction with humans. However, we believe that our approach will contribute to speeding up the learning process of robots, allowing the robot to connect all of these aspects and apply them in new situations”, says Maximilian Diehl.
Text: Sandra Tavakoli and Karin Wik
The research was carried out in collaboration with Chris Paxton, a research scientist at NVIDIA. This project was supported by Chalmers AI Research Centre (CHAIR).
Watch the film explaining the research: Automated Generation of Robotic Planning Domains from Observations (YouTube).
For more information, contact:
Maximilian Diehl. PhD Student at the Department of Electrical Engineering
diehlm@chalmers.se
+46 31 772 171
Karinne Ramirez-Amaro, Assistant professor at the Department of Electrical Engineering
karinne@chalmers.se
+46 31 772 10 74