Seminar
The event has passed

Generalist Foundation Models for Safe and Reliable Clinical Decision Support

Speaker: Sana Tonekaboni, Broad Institute of MIT and Harvard

Overview

  • Date: 2 February 2026, 14:00–15:00
  • Location: Analysen, EDIT-huset
  • Language: English

If you cannot attend in person, you can join online via Zoom (password: monday).
If you wish to meet with Sana, please add your name to the desired time slot in this document.

Abstract

The last few years have changed the shape of clinical AI: we’re moving from single-purpose models built for one prediction task to “generalist” foundation models that can be reused across many workflows, modalities, and decisions. That shift is exciting, but in medicine it immediately raises hard practical questions: When can we trust these systems? How do we understand what they are drawing on from messy multimodal data? How do we know when they are uncertain or out of distribution? And how do we evaluate risks like memorization and data leakage when the training data are sensitive by default? This talk draws on my recent work on making generalist clinical AI more dependable in practice: developing interpretable multimodal representations, estimating uncertainty in pre-trained models, and measuring privacy and leakage risks in medical foundation models, so that clinical decision support can be not just powerful but reliably safe and usable.

Richard Beckmann
  • Postdoc, Data Science and AI, Computer Science and Engineering