Date: Wednesday 13 October 2021, 14:00-18:00
Place: Världskulturmuseet, Göteborg, Sweden (Room Studion)
Organiser: AI Ethics committee at CHAIR (Chalmers AI Research Centre)
Workshop Chair: Gerardo Schneider (University of Gothenburg)
Event Coordinator: Rebecka Bergström Bukovinszky (Världskulturmuseet)
Moderator: Olle Häggström (Chalmers, Sweden)
INVITED SPEAKERS:
Prof. Devdatt Dubhashi (Chalmers, Sweden)
Prof. Amanda Lagerkvist (Uppsala University, Sweden)
Prof. Barbara Plank (IT University of Copenhagen, Denmark)
Prof. Moshe Vardi (Rice University, USA)
ABOUT THE SPEAKERS:
Name: Devdatt Dubhashi
Affiliation: Department of Computer Science and Engineering, Chalmers
Short Bio: Devdatt Dubhashi is a Professor in the Data Science and AI Division of the Department of Computer Science and Engineering, Chalmers. He received his Ph.D. in Computer Science from Cornell University, USA, and was a postdoctoral fellow at the Max Planck Institute for Computer Science in Saarbrücken, Germany. He was with BRICS (Basic Research in Computer Science, a center of the Danish National Science Foundation) at the University of Aarhus and then on the faculty of the Indian Institute of Technology (IIT) Delhi before joining Chalmers in 2000. He has led several national projects in machine learning and has been associated with several EU projects. He has been an external expert for the OECD report on “Data Driven Innovation”. His work features regularly at the main machine learning venues, such as NeurIPS, ICML, AAAI and IJCAI.
Title of Presentation: “Elysium or Christiania? AI, Automation and the Future of Work”
Abstract: While AI has already shown transformative effects across a whole spectrum of industries, generating huge economic value, concerns have been raised about the potentially disruptive effects of AI and automation on the economy and society at large. Some studies warn that up to 40 or 50 percent of current jobs could be lost to automation, while others are sanguine that new jobs will be created to replace them. We give a technological perspective on the current state of the art in AI and lead up to reflections on what we as individuals, companies and governments could do towards a new social contract for the future of work.
Name: Amanda Lagerkvist
Affiliation: Department of Informatics and Media, Uppsala University (Sweden)
Short Bio: Amanda Lagerkvist is Professor of Media and Communication Studies in the Department of Informatics and Media at Uppsala University. She is principal investigator of the Uppsala Informatics and Media Hub for Digital Existence: https://www.im.uu.se/research/hub-for-digtal-existence. As a Wallenberg Academy Fellow (2014-2018) she founded the field of existential media studies. She heads the project “BioMe: Existential Challenges and Ethical Imperatives of Biometric AI in Everyday Lifeworlds”, funded by the Marianne and Marcus Wallenberg Foundation (within WASP-HS: http://wasp-hs.org), in which her group studies the lived experiences of biometric AI, for example voice and face recognition technologies. In her monograph, Existential Media: A Media Theory of the Limit Situation (forthcoming with OUP in March 2022), she introduces Karl Jaspers’ existential philosophy of limit situations for media theory, in the context of the increasing digitalization of death and automation of the lifeworld. She has recently published “Digital Limit Situations: Anticipatory Media Beyond ‘the New AI Era,’” Journal of Digital Social Research, 2:3, 2020. For more information about activities, key publications and output in the field of existential media studies, see: https://www.im.uu.se/research/hub-for-digtal-existence/output-and-publications-in-existential-media-studies/
Title of the Presentation: “Body Stakes: Beyond the ‘Ethical Turn’ – Toward An Existential Ethics of Care in Living with Automation”
Abstract: This talk discusses the key existential stakes of implementing biometrics in human lifeworlds. It introduces an existential ethics of care – through a conversation between existentialism, virtue ethics and a feminist ethics of care – that sides with and never leaves the vulnerable human body, while recognizing human diversity and the plurality of lived experience of technology. This implies moving beyond the “ethical turn” by revisiting basic questions about what it means to be human, mortal and embodied, in order to safeguard existential needs and necessities in an age of increased automation. While recognizing that biometrics has beneficial affordances, the key argument is nevertheless that it implicates humans through unprecedented forms of objectification, through which the existential body – the frail, finite, concrete and unique human being – is at stake. Zooming in on three sites where this is currently manifest, the presentation will stress that an existential ethics of care is, importantly, not a solutionist list of principles or suggestions, but a way of thinking about the ethical challenges of living with biometrics in today’s world. The focus on basic existential stakes within human lived experience should serve as the foundation on which comprehensive frameworks can be built to address the complexities and prospects for ethical machines, responsible biometrics and AI.
Name: Barbara Plank
Affiliation: IT University of Copenhagen, Denmark
Short Bio: Barbara Plank is Professor in the Computer Science Department at ITU (IT University of Copenhagen), where she is also Head of the Master in Data Science Programme. She received her PhD in Computational Linguistics from the University of Groningen. Her research interests focus on Natural Language Processing, in particular transfer learning and adaptation, learning from beyond the text, and, in general, learning under limited supervision and from fortuitous data sources. She has (co-)organised several workshops and international conferences, among them the PEOPLES workshop (since 2016) and the first European NLP Summit (EurNLP 2019). Barbara was general chair of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa 2019) and workshop chair for ACL 2019. She is a member of the advisory board of the European Association for Computational Linguistics (EACL) and vice-president of the Northern European Association for Language Technology (NEALT).
Title of the Presentation (preliminary): “Ethics, AI, Technology and Society: Perspectives from Natural Language Processing”
Abstract: Deep Learning (DL) has revolutionized research fields of AI such as NLP, speech processing and computer vision. This is evidenced in popular commercial products, such as the digital assistants that have entered our homes. Many of these advances are fueled by large pre-trained language models. In this talk I will argue that the current view is myopic: many challenges remain despite these recent advances, most of them due to the rich variability of language and the dreadful lack of resources. I will outline some possible ways to address these challenges, drawing upon recent research, and discuss how various types of bias affect NLP.
Name: Moshe Vardi
Title: George Distinguished Service Professor
Affiliation: Department of Computer Science, Rice University (USA)
Short Bio: Moshe Y. Vardi is a University Professor, the George Distinguished Service Professor in Computational Engineering, and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is the author or co-author of over 650 papers, as well as two books. He is the recipient of several scientific awards, a fellow of several societies and a member of several honorary academies, and he holds seven honorary doctorates. He is a Senior Editor of Communications of the ACM, the premier publication in computing, where he focuses on the societal impact of information technology.
Title of the Presentation (preliminary): “Ethics Washing in AI”
Abstract: Over the past decade Artificial Intelligence in general, and Machine Learning in particular, have made impressive advances in image recognition, game playing, natural-language understanding and more. But there have also been several instances where we saw the harm that these technologies can cause when they are deployed too hastily. In response, there has been much recent talk of AI ethics. But talk is cheap. “Ethics washing” — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. I will argue that the ethical lens is too narrow. The real issue is how to deal with technology’s impact on society. Technology is driving the future, but who is doing the steering?
Name: Olle Häggström
Affiliation: Department of Mathematical Sciences, Chalmers (Sweden)
Short Bio: Olle Häggström is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences. The bulk of his research qualifications are in probability theory, but in recent years he has shifted focus towards existential risk and AI safety. He has worked on AI policy at both the national and the EU level, as well as with the World Economic Forum.