Seminar
The event has passed

Research seminar with Giovanni Cina

Assistant Professor Giovanni Cina from the University of Amsterdam will give a talk at Chalmers titled "When accurate prediction models used for decision support end up harming patients".

Abstract

Prediction models are popular in medical research and practice. Many expect that, by predicting patient-specific outcomes, these models can inform treatment decisions, and they are frequently lauded as instruments for personalised, data-driven healthcare. We show, however, that using prediction models to automate decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment. Such models are harmful self-fulfilling prophecies: their deployment harms a group of patients, yet the worse outcomes of these patients do not diminish the model's discrimination. Our main result is a formal characterization of such prediction models. We then show that models that are well calibrated both before and after deployment are useless for decision-making, since their deployment induces no change in the data distribution.

In the second part of the talk, we move from automation to decision support and discuss how, even with a human decision maker in the loop, introducing a prediction model can lead to undesirable outcomes. These results call for a reconsideration of standard practices for validating and monitoring prediction models used in medical decisions, as well as for the training of their users.
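The harmful self-fulfilling prophecy described in the abstract can be sketched in a toy simulation. Everything below — the severity variable, the treatment effect, and the risk-based withholding policy — is an illustrative assumption, not a result or model from the talk:

```python
import numpy as np

# Toy illustration (hypothetical setup): a risk model that remains highly
# discriminative after deployment even though the deployment policy harms
# the very patients it flags.

rng = np.random.default_rng(0)
n = 100_000
severity = rng.uniform(0.0, 1.0, n)      # latent patient severity

# Pre-deployment, every patient is treated; treatment halves death risk.
p_death_treated = 0.5 * severity
p_death_untreated = severity

# A "perfect" model of pre-deployment risk: P(death | treated).
risk = 0.5 * severity

# Hypothetical deployment policy: withhold treatment from patients whose
# predicted risk exceeds a threshold (e.g. judged too sick to benefit).
withhold = risk > 0.25
p_death = np.where(withhold, p_death_untreated, p_death_treated)
died = rng.random(n) < p_death

def auc(score, label):
    """Discrimination (AUC) via the Mann-Whitney rank formula."""
    ranks = np.empty(len(score))
    ranks[np.argsort(score)] = np.arange(1, len(score) + 1)
    n_pos = label.sum()
    n_neg = len(label) - n_pos
    return (ranks[label].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Harm: withholding treatment doubles mortality in the flagged group...
print(f"mortality, withheld group:          {died[withhold].mean():.3f}")
print(f"if that group had all been treated: {p_death_treated[withhold].mean():.3f}")
# ...yet post-deployment discrimination stays excellent, because the
# self-fulfilling predictions still rank deaths above survivals.
print(f"post-deployment AUC: {auc(risk, died):.3f}")
```

In this sketch the flagged group's mortality roughly doubles, yet the model's AUC on post-deployment data remains well above 0.8 — so monitoring discrimination alone would not reveal the harm, which is the point the abstract makes about standard validation practice.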

Short bio

Giovanni Cina is an Assistant Professor in Responsible Medical AI at the Medical Informatics department and at the Institute for Logic, Language and Computation at the University of Amsterdam. He previously worked for several years at Pacmed on the evaluation of clinical AI.

Fredrik Johansson
  • Associate Professor, Data Science and AI, Computer Science and Engineering