Workshop
The event has passed

Interpretable approaches in human-machine interaction

This workshop will be focused on transparent, safe, and accountable AI methods in the field of human-machine interaction. It is organized by the CHAIR theme Interpretable AI.

Overview

  • Date: Starts 24 October 2023, 08:30; ends 25 October 2023, 17:00
  • Seats available: 30
  • Location: Chalmers Conference Centre, Room Scania, Chalmersplatsen 1
  • Language: English

Description

The workshop focuses on interpretable methods, i.e., methods whose inner workings are human-understandable, ideally even to a non-expert. Its aim is to bring together researchers interested in applying such methods in AI applications that involve high-stakes decision-making, for example in medicine, robotics, finance, and education.

The terms interpretability and explainability are sometimes used interchangeably. Note that, for this workshop, we make a clear distinction between the two: interpretable systems are defined as indicated above (see also, for example, articles by Rudin or Wahde and Virgolin), whereas explainability refers to attempts to explain the decisions taken by (or the inner workings of) black-box models, such as deep neural networks. The focus of this workshop is on interpretability.

Relevant topics include:

  • Interpretability in conversational AI
  • Interpretability in human-robot interaction
  • Interpretable and explainable methods in robotics
  • Methods for visualizing interpretable AI methods and their decision-making
  • Performance comparison between interpretable methods and black-box methods
  • Applications of interpretable human-machine interaction

Programme

Day 1: Oct. 24

08.30-09.00 Registration and coffee (outside room Scania at Chalmers Conference Centre)
09.00-09.15 Welcome message (Karinne Ramírez-Amaro and Mattias Wahde)
09.15-10.00 Mattias Wahde: Why interpretability?

10.00-10.30 Coffee break

10.30-11.30 Invited talk: Prof. Elin Anna Topp, Lund University. Supporting users in supporting systems.
11.35-11.55 Karinne Ramírez-Amaro: Interpretable AI + Robotics

12.00-13.30 Lunch (Hyllan, Chalmers Conference Centre)

13.30-13.50 Maximilian Diehl: Interpretable Decision-Making for Robots.
13.55-14.15 Minerva Suvanto: Interpretable text classification.

14.30-15.00 Coffee break

15.00-15.20 Isacco Zappa: Cobots Understanding Skills Programmed by Demonstration.
15.30-16.45 Interactive poster session

Day 2: Oct. 25

08.30-09.00 Registration and coffee (outside room Scania at Chalmers Conference Centre)
09.00-10.00 Invited talk: Prof. Jim Törresen (University of Oslo): Human Intuition and its Impact on Human–Robot Interaction Regarding Safety and Accountability.
10.05-10.25 Marco Della Vedova: Interpretable AI methods for assessing naturalness of forests.

10.30-11.00 Coffee break

11.00-12.00 Invited talk: Prof. Thomas Hellström (Umeå University): Understandable robots - What, why, and how?
12.00-12.05 Short break
12.05-12.25 Neil Walkinshaw: Causal Testing of Scientific Software Models.
12.30-12.50 Alexander Berman: Can large language models explain interpretable models?

13.00-14.30 Lunch

14.30-14.50 Vasiliki Kondyli: Grounding Embodied Multimodal Interaction: Towards Behaviourally Established Semantic Foundations for Human-Centered AI.
14.55-15.15 Jing Zhang: Combining Interpretability and Black-Box Approaches: Integrating Symbolic Planning with Hierarchical RL.

15.30-16.00 Coffee break

16.00-16.20 Sabino Roselli: Conflict-free routing of mobile robots.
16.20-16.30 Closing words (Karinne Ramírez-Amaro and Mattias Wahde)

External speakers – titles, abstracts and biographies

Registration includes:

  • Participation in the workshop sessions (on-site only).
  • Lunch at Chalmers’ restaurant Hyllan on both days.
  • A PDF with all the presentations from the workshop (which will be sent out after the event).

The event is fully booked.


Interpretable AI

Interpretable AI is an emerging field, focused on developing AI systems that are transparent and understandable to humans.