Maximilian Diehl, Electrical Engineering

Title: Explainable and Interpretable Decision Making for Robotic Tasks


Maximilian Diehl is a PhD student in the Mechatronics research group, Division of Systems and Control.
Discussion leader is Dr. Daniel Leidner, German Aerospace Center (DLR).
Main supervisor is Associate Professor Karinne Ramirez-Amaro, Division of Systems and Control.
Examiner is Professor Jonas Sjöberg, Division of Systems and Control.

Abstract
Future generations of robots, such as service robots that support humans with household tasks, will be a pervasive part of our daily lives. The human's ability to understand the decision-making process of robots is considered crucial for establishing trust-based and efficient interactions between humans and robots. In this thesis, we present several interpretable and explainable decision-making methods that aim to improve the human's understanding of a robot's actions, with a particular focus on explaining why robot failures occur.

We consider different types of failures, such as task recognition errors and task execution failures. Our first goal is an interpretable approach to learning from human demonstrations (LfD), which allows robots to learn new tasks without a time-consuming trial-and-error learning process. Our proposed method addresses the challenge of transferring human demonstrations to robots by automatically generating symbolic planning operators from interpretable decision trees. Our second goal is the prediction, explanation, and prevention of robot task execution failures based on causal models of the environment. Our contribution towards this goal is a causal-based method that finds contrastive explanations for robot execution failures, which enables robots to predict, explain, and prevent even temporally shifted action failures (e.g., the current action is successful but will negatively affect the success of future actions). Since learning causal models is data-intensive, our final goal is to improve data efficiency by utilizing prior experience. This investigation aims to help robots learn causal models faster, enabling them to provide failure explanations at the cost of fewer action execution experiments.

In future work, we aim to scale up the presented methods to generalize to more complex, human-centered applications.

Category: Licentiate seminar
Location: EC, Lecture hall, Hörsalsvägen 11
Starts: 05 December, 2022, 10:00
Ends: 05 December, 2022, 12:00

Published: Fri 11 Nov 2022.