Workshop
The event has passed

Responsible AI

This workshop is organized in connection with the conferral of an Honorary Doctorate to Ricardo Baeza-Yates at the University of Gothenburg.

Responsible AI is an increasingly important field, and Professor Baeza-Yates has played a leading role in several international initiatives in this area. The workshop will bring together international, national, and local researchers to discuss current developments and explore opportunities for future collaboration.

Overview

  • Date: 15 October 2025, 08:30–17:00
  • Seats available: 70
  • Location: Room SB-H6 and Zoom (password: 247500)
  • Language: English
  • Last sign-up date: 10 October 2025

Program

8:30–9:00: Registration and coffee

9:00–9:05: Brief introduction

9:05–10:00: Ricardo Baeza-Yates, KTH, U. Pompeu Fabra and Chalmers: “The Limitations of Data, ML & Us”

10:00–10:45: Corinna Coupette, Aalto University, Finland: “Responsible AI and Law: A Complex-Systems Perspective”

10:45–11:30: Aris Gionis, KTH: “Fairness and diversity in data summarization: theory and applications”

11:30–11:50: Discussion

12:00–12:45: Lunch

13:00–13:45: Miriam Fernandez, Open University, UK: “Responsible AI for the protection of women”

13:45–14:30: Francesco Bonchi, CENTAI, Italy: “Randomization for Algorithmic Fairness”

14:30–15:00: Fika (coffee and cakes)

15:00–15:45: Daniele Quercia, King's College London and Nokia Bell Labs Cambridge: “Addressing Misconceptions: Dispelling Myths in Responsible AI Practices”

15:45–16:30: Devdatt Dubhashi, Chalmers: “AI2027 and Responsible AI”

16:30–17:00: Panel and concluding discussion

A simple vegetarian lunch will be offered, as well as fika breaks.

Abstracts

R. Baeza-Yates: The Limitations of Data, Machine Learning & Us

Machine learning (ML), particularly deep learning, is being used everywhere. However, it is not always used well, ethically, or scientifically. In this talk we first take a deep dive into the limitations of supervised ML and of data, its key component. We cover small data, datafication, bias, predictive-optimization issues, evaluating success instead of harm, and pseudoscience, among other problems. The second part is about our own limitations in using ML, including different types of human incompetence: cognitive biases, unethical applications, lack of administrative competence, misinformation, and the impact on mental health. In the final part we discuss regulation of the use of AI and responsible-AI principles that can mitigate the problems outlined above.

F. Bonchi: Randomization for Algorithmic Fairness

Algorithmic decision-making has become pervasive in high-stakes domains such as health, education, and employment. This widespread adoption raises crucial concerns about the fairness of the algorithms adopted. In this talk, I will delve into a recent line of research that explores individual fairness in combinatorial optimization problems, where many valid solutions may exist for a given problem instance. Our proposal, named distributional max-min fairness, leverages the power of randomization to maximize the expected satisfaction of the most disadvantaged individuals. The talk will highlight applications across fundamental algorithmic challenges, including matching, ranking, and shortest-path queries.

C. Coupette: Responsible AI and Law: A Complex-Systems Perspective

The relationship between Responsible AI and law is complicated: On the one hand, law is mostly regarded as a cornerstone of Responsible AI, but designing and implementing effective regulation has proved challenging. On the other hand, AI is often positioned as a potential remedy for problems in legal practice, but not all current uses of AI in the legal domain can be characterized as responsible. In this talk, I elucidate the interplay between Responsible AI and law through the lens of complexity science. I sketch what it means to view AI systems and legal systems as complex systems, and I discuss the implications of this perspective for our efforts to make Responsible AI a reality.

D. Dubhashi: AI2027 and Responsible AI

A recent report called “AI2027” has attracted many headlines about alleged extreme risks of AI in the very near future. We will discuss the report's claims sceptically and comment on them in the context of “responsible AI”.

M. Fernandez: Responsible AI for the protection of women

In this talk, I cover the work that we are doing with the Centre for Protecting Women Online (https://university.open.ac.uk/centres/protecting-women-online/). This centre addresses technology-facilitated gender-based violence from a multidisciplinary point of view (law, psychology, technology, policing). I cover some of the problems of existing technologies that particularly affect women, some of the solutions we are proposing, and the challenges in terms of law (particularly the lack of coverage of many of these harms) and policing practices.
 
A. Gionis: Fairness and diversity in data summarization: theory and applications

How can we select small but representative sets of data, search results, or news articles that are relevant but also satisfy fairness and diversity criteria? In this talk we will present recent advances in algorithms for fair and diverse summarization across different domains. First, we study fairness in clustering problems, where selected representatives must proportionally reflect different groups in the data. We design methods with approximation guarantees under standard complexity assumptions. Second, we introduce sequential diversification, a new framework that captures how users consume ranked lists, and we develop algorithms with provable guarantees for maximizing diversity in sequential data. Finally, we examine news aggregation, where ensuring balanced coverage requires going beyond source diversity to capture the full range of viewpoints. Across these settings, we develop principled algorithms and validate them on real-world datasets.

D. Quercia: Addressing Misconceptions: Dispelling Myths in Responsible AI Practices

In this talk, Daniele will dive deep into debunking some prevalent myths surrounding responsible AI, crucial for informed decision-making. By challenging these misconceptions, we can pave the way for ethical and effective AI practices.
 
Controversial opinions and discussion are heavily encouraged. Themes include:
• The use of impact assessments.
• The use and components of risk scoring.
• AI will take your job.
• AI and human intelligence.
• AI regulation will stifle innovation.
• Bias should always be eliminated.
• AI will be a competitive advantage.

Short Bios

Ricardo Baeza-Yates is a part-time WASP Professor at KTH Royal Institute of Technology in Stockholm, as well as a part-time professor in the Department of Engineering at Universitat Pompeu Fabra in Barcelona and the Department of Computing Science at the University of Chile in Santiago. Previously, he was Director of Research at the Institute for Experiential AI of Northeastern University, at its Silicon Valley campus (2021–25), and VP of Research at Yahoo Labs, based first in Barcelona, Spain, and later in Sunnyvale, California (2006–16). He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. In 2009 he was named an ACM Fellow, and in 2011 an IEEE Fellow. He has won national scientific awards in Chile (2024) and Spain (2018), among other accolades and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada, and his areas of expertise are responsible AI, web search, and data mining, plus data science and algorithms in general.

Francesco Bonchi is Co-Founder and Research Director at CENTAI (Center for Artificial Intelligence) in Turin, Italy, and also holds a part-time position at Eurecat (Technological Center of Catalunya) in Barcelona, Spain. He serves on the AI task force of the Italian Government and is a member of the Board of Directors of the Anti-Financial Crime Digital Hub in Turin. Previously, he was the Scientific Director at the ISI Foundation in Turin and the Director of Research at Yahoo Labs in Barcelona.

Dr. Bonchi's recent research interests encompass algorithms and learning on complex networks, fair and explainable AI, and the broader domain of trustworthiness and ethical aspects of data science and AI. He has authored over 250 publications in these fields, earning several Best Paper Awards, including at the prestigious World Wide Web Conference 2022. Additionally, he holds 9 US patents, which earned him the 2013 Yahoo Master Inventor Award. More information at: www.francescobonchi.com

Corinna Coupette (they/she) is an Assistant Professor of Computer Science at Aalto University, a Guest Researcher at the Max Planck Institute for Informatics, a Research Affiliate at the Max Planck Institute for Tax Law and Public Finance, a CodeX Affiliate, and a member of ELLIS. They studied law at Bucerius Law School and Stanford Law School (2010–2015) and computer science at LMU Munich and Saarland University (2015–2020). Corinna completed their PhD in law at the Max Planck Institute for Tax Law and Public Finance (Dr. iur. 2018, summa cum laude) and their PhD in computer science at the Max Planck Institute for Informatics (Dr. rer. nat. 2023, summa cum laude). They have received numerous awards for their research and service, including a 2025 ERC Starting Grant for the project CompLex: Toward a Computational Theory of Legal Complexity.

Miriam Fernandez is a Professor of Responsible Artificial Intelligence at the Knowledge Media Institute (KMi), Open University (OU), UK. Her research agenda revolves around advancing Responsible AI, ensuring that technological innovation aligns with ethical principles and societal values. Her pioneering work spans diverse domains, from algorithmic transparency and fairness to the societal implications of AI deployment. By integrating AI techniques with a human-centred approach, she fosters solutions that prioritise social responsibility, transparency, and inclusivity. With a portfolio of more than 100 scientific articles in some of the best conferences and journals in her field, and having won numerous external grants supporting her research, Professor Fernandez has significantly influenced the discourse in the field of technology development and its impact on society. Her commitment to education is demonstrated through her leadership of OUAnalyse, a strategic initiative leveraging machine-learning methods for the early identification of students at risk. This technology, currently supporting the Open University’s 200K student body, has been highly awarded for its transformative impact on student outcomes. Professor Fernandez is also Equality and Diversity Champion for both KMi and the OU, where she leads the Responsible AI stream of the Center for Protecting Women Online, a flagship initiative that plays a critical role in mitigating the harmful effects of technology on women and girls worldwide.

Aristides Gionis is a WASP professor at KTH Royal Institute of Technology, Sweden. He obtained his PhD from Stanford University. He has been a professor at Aalto University, a visiting professor at the Sapienza University of Rome, and a research scientist at Yahoo! Research. He has contributed to several areas of data science, including data clustering and summarization, graph mining and social-network analysis, analysis of data streams, and fairness and interpretability in machine learning. His current research is funded by the Wallenberg AI, Autonomous Systems and Software Program (WASP), by the European Commission with an ERC Advanced grant, and by the Swedish Research Council (VR).

Daniele Quercia is Director of Responsible AI at Nokia Bell Labs Cambridge (UK) and Professor of Computer Engineering at Politecnico di Torino. He was named one of Fortune magazine's 2014 Data All-Stars and spoke about “happy maps” at TED. He was a Research Scientist at Yahoo Labs, a Horizon senior researcher at the University of Cambridge, and a Postdoctoral Associate in the Department of Urban Studies and Planning at MIT. He received his PhD from University College London.

 

The workshop is sponsored by CHAIR (Chalmers Artificial Intelligence Research Centre) and the Chalmers Area of Advance ICT.

Devdatt Dubhashi
  • Head of Unit, Data Science and AI, Computer Science and Engineering
Gerardo Schneider
  • Head of Division, Data Science and AI, Computer Science and Engineering