Title: Transparency and Trust in Humanoid Robots: A Case for Non-Verbal Signals
It is commonly assumed that robots must be completely transparent for people to trust them. Although transparency is a worthwhile goal in itself, I argue that, in practice, it has little to do with trust. Most AI systems are too complex to understand without substantial effort, and it is unlikely that most people will want to spend much time understanding the inner workings of a robot control system. Instead, I propose that as much of a robot's operation as possible should take place in the external world, which is accessible to both robots and humans, using mostly local information, and that robot behaviours should aim to communicate the robot's goals and intentions so that its behaviour becomes easier to predict. Just as we trust other people not because we know exactly how their brains work, but because we are confident that we can predict their behaviour, robots can use non-verbal cues to become more predictable. I will report on current research towards this aim.
14 September, 2021, 13:15