Seminar
Open for registration

CHAIR Partner Seminar: AI security and privacy across different industries

In this seminar series, we welcome Chalmers' AI researchers and researchers from CHAIR's Core partners – Volvo Cars, Volvo Group, Sahlgrenska University Hospital, Ericsson and Zeekr – to discuss AI research topics that engage all of us.

Overview

Registration

Each seminar includes a networking session to promote the exchange of ideas, not only from presenter to listener but between all participants.
Our ambition is to foster a collaborative environment between CHAIR and our core partners and to learn from each other.

Moderator: Fredrik Johansson, CHAIR.

 

Agenda 15 October:

10:00-10:20

AI security: privacy-preserving techniques

Alexandre Graell i Amat is a Professor with the Communication Systems group. His research interests are in the area of (modern) coding theory and cover a broad range of topics, including distributed storage, caching, distributed computing, and optical communications.

This seminar explores cutting-edge research in AI security, focusing on privacy-preserving techniques such as federated learning, privacy auditing methods to reveal vulnerabilities and assess the privacy of AI models, and strategies to improve robustness against adversarial attacks. Applications span healthcare, finance, and other sensitive domains.
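To make the federated-learning idea mentioned above concrete, here is a minimal sketch of federated averaging on a toy logistic-regression model: clients train locally and share only model updates, never raw data. The function names, model, and data are illustrative assumptions, not material from the talk.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One client's local training step (plain logistic-regression SGD).
    Only the updated weights leave the client, never the raw data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Server loop: send the global model out, then average the returned
    updates weighted by each client's dataset size (FedAvg)."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data, labels in clients:
            updates.append(local_update(global_w, data, labels))
            sizes.append(len(labels))
        sizes = np.array(sizes, dtype=float)
        global_w = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return global_w

# Toy usage: three clients with synthetic data, one shared 5-feature model.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 5)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]
w = federated_averaging(np.zeros(5), clients)
```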

 

10:20-10:40

AI security and data privacy in healthcare

Magnus Kjellberg, Sahlgrenska, has more than 20 years of experience in data analytics and AI, mainly from the life science and health care sectors. He has been responsible for AI at Sahlgrenska University Hospital since 2021 as head of the AI Competence Center. He has authored the data and AI strategy for Region Västra Götaland and is involved in several national and international initiatives concerning data and AI.

 

10:40-11:00

AI security: federated machine unlearning

Ayush Kumar Varshney is a WASP-Experienced researcher with Ericsson. His interests are in the area of data privacy, federated learning, foundation models, and machine unlearning.

This seminar explores auditable machine unlearning solutions for federated unlearning. We explore users' and institutions' right to remove their data, and its contributions, from an already trained machine learning model in federated settings. We examine both exact and approximate unlearning approaches, comparing their effectiveness, efficiency, and auditability. We focus on privacy-preserving solutions both when a large number of clients participate in horizontal federated learning and when a few institutions, such as hospitals or banks, collaborate to train a joint model.
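As a rough illustration of the exact-versus-approximate distinction drawn in the abstract, the sketch below contrasts full retraining without the departing client against a cheap fine-tuning heuristic. The toy model, function names, and the specific heuristic are assumptions for illustration, not the speaker's method.

```python
import numpy as np

def fedavg_round(w, clients, lr=0.1):
    """One round of federated averaging over the given clients
    (toy logistic-regression updates, averaged by data size)."""
    updates, sizes = [], []
    for X, y in clients:
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        updates.append(w - lr * X.T @ (preds - y) / len(y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

def exact_unlearn(init_w, clients, forget_idx, rounds=10):
    """Exact unlearning: retrain from scratch with the departing client
    excluded. Strong guarantee, but full retraining cost."""
    kept = [c for i, c in enumerate(clients) if i != forget_idx]
    w = init_w.copy()
    for _ in range(rounds):
        w = fedavg_round(w, kept)
    return w

def approx_unlearn(trained_w, clients, forget_idx, rounds=3):
    """Approximate unlearning (illustrative heuristic only): keep the
    trained model and fine-tune a few rounds on the remaining clients,
    letting the forgotten client's influence decay. Cheaper, but the
    residual influence must be audited rather than guaranteed."""
    kept = [c for i, c in enumerate(clients) if i != forget_idx]
    w = trained_w.copy()
    for _ in range(rounds):
        w = fedavg_round(w, kept)
    return w
```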

 

11:00-11:30

Networking session with coffee

 

11:30-12:00

Discussion: AI security and privacy across different industries; reflections on the presentations.