Events: Information and Communication Technology
Upcoming events at Chalmers University of Technology
http://www.chalmers.se/sv/om-chalmers/kalendarium

Workshop: Diagnostics, imaging and AI
https://www.chalmers.se/en/areas-of-advance/health/calendar/Pages/Workshop-diagnostics-imaging-and-AI.aspx

Welcome to this workshop!

Sahlgrenska University Hospital, Sahlgrenska Academy and Chalmers, via CHAIR and the Health Engineering Area of Advance, are arranging a workshop to spark collaborations for research into diagnostics, imaging, and AI. The aim is to create new constellations that will drive important and necessary development for solving challenges in healthcare.

The workshop will be a mix of short presentations from PIs (~5 min each), a coffee break with a poster exhibition, thematic discussions to identify research needs and joint interests, and a concluding joint lunch.

This workshop is held at Sahlgrenska University Hospital, Blå Stråket 5.

Questions? Contact: Magnus Kjellberg (magnus.kjellberg@vgregion.se)

Register here: https://forms.office.com/Pages/ResponsePage.aspx?id=VaJi_CBC5EebWkGO7jHaX9rTyo5oUE5MhBAD5UPV_d5UOVhLNzFJV0VaS1FZWFFMMEhFN0dXSzFBSS4u

Workshop on Ethics, AI, Technology and Society
https://www.chalmers.se/en/centres/chair/events/Pages/Seminars-on-Ethics,-AI,-Technology-and-Society.aspx
Världskulturmuseet, Göteborg, Sweden (Room Studion)

What is the role of humans in a world with ever more powerful AI capabilities?

Welcome to a half-day meeting, open to researchers, decision makers and the general public, which aims to address crucial questions about the role of humans in a world with ever more powerful AI capabilities. In such a future, which decisions can we hand over to machines, and which should remain the responsibility of humans? When is the human-in-the-loop a viable concept? Are there domains that should be considered entirely off-limits for AI decision making? What will, in the long run, be the role of work?
These questions will be addressed from a variety of perspectives including engineering, social science, philosophy and ethics.

Register here: https://secure.tickster.com/sv/1dlpuvcujdcv98e/products

Date: Wednesday 13 October 2021, 14:00-18:00
Place: Världskulturmuseet, Göteborg, Sweden (Room Studion)
Organiser: AI Ethics committee at CHAIR (Chalmers AI Research Centre)
Workshop Chair: Gerardo Schneider (University of Gothenburg), http://www.cse.chalmers.se/~gersch/
Event Coordinator: Rebecka Bergström Bukovinszky (Världskulturmuseet)
Moderator: Olle Häggström (Chalmers, Sweden)

Speakers:
Prof. Devdatt Dubhashi (Chalmers, Sweden)
Prof. Amanda Lagerkvist (Uppsala University, Sweden)
Prof. Barbara Plank (IT University of Copenhagen, Denmark)
Prof. Moshe Vardi (Rice University, USA)

ABOUT THE SPEAKERS:

Name: Devdatt Dubhashi
Title: Professor
Affiliation: Department of Computer Science and Engineering, Chalmers

Short Bio: Devdatt Dubhashi is a Professor in the Data Science and AI Division in the Department of Computer Science and Engineering, Chalmers. He received his Ph.D. in Computer Science from Cornell University, USA, and was a postdoctoral fellow at the Max Planck Institute for Computer Science in Saarbrücken, Germany. He was with BRICS (Basic Research in Computer Science, a center of the Danish National Science Foundation) at the University of Aarhus and then on the faculty of the Indian Institute of Technology (IIT) Delhi before joining Chalmers in 2000. He has led several national projects in machine learning and has been associated with several EU projects. He has been an external expert for the OECD report on "Data Driven Innovation".
He features regularly at the main machine learning venues such as NeurIPS, ICML, AAAI and IJCAI.

Title of Presentation: "Elysium or Christiania? AI, Automation and the Future of Humanity"

Abstract: TBA

———————————

Name: Amanda Lagerkvist, https://katalog.uu.se/profile/?id=N7-774
Title: Professor
Affiliation: Department of Informatics and Media, Uppsala University (Sweden)

Short Bio: Amanda Lagerkvist is Professor of Media and Communication Studies in the Department of Informatics and Media at Uppsala University. She is principal investigator of the Uppsala Informatics and Media Hub for Digital Existence: https://www.im.uu.se/research/hub-for-digtal-existence. As Wallenberg Academy Fellow (2014-2018) she founded the field of existential media studies. She heads the project "BioMe: Existential Challenges and Ethical Imperatives of Biometric AI in Everyday Lifeworlds", funded by the Marianne and Marcus Wallenberg Foundation (within WASP-HS: http://wasp-hs.org), in which her group studies the lived experiences of biometric AI, for example voice and face recognition technologies.
In her monograph, Existential Media: A Media Theory of the Limit Situation (forthcoming with OUP in March 2022), she introduces Karl Jaspers' existential philosophy of limit situations for media theory, in the context of the increasing digitalization of death and automation of the lifeworld. She has recently published "Digital Limit Situations: Anticipatory Media Beyond 'the New AI Era,'" Journal of Digital Social Research, 2:3, 2020. For more information about activities and key publications and output in the field of existential media studies, see: https://www.im.uu.se/research/hub-for-digtal-existence/output-and-publications-in-existential-media-studies/

Title of the Presentation: "Body Stakes: Beyond the 'Ethical Turn' – Toward An Existential Ethics of Care in Living with Automation"

Abstract: This talk discusses the key existential stakes of implementing biometrics in human lifeworlds. It introduces an existential ethics of care – through a conversation between existentialism, virtue ethics and a feminist ethics of care – that sides with and never leaves the vulnerable human body, while recognizing human diversity and the plurality of lived experience of technology. This implies moving beyond the "ethical turn" by revisiting basic questions about what it means to be human, mortal and embodied, in order to safeguard existential needs and necessities in an age of increased automation. Recognizing that biometrics has beneficial affordances, the key argument is nevertheless that it implicates humans through unprecedented forms of objectification, through which the existential body – the frail, finite, concrete and unique human being – is at stake. Zooming in on three sites where this is currently manifest, the presentation will stress that an existential ethics of care is importantly not a solutionist list of principles or suggestions, but a way of thinking about the ethical challenges of living with biometrics in today's world.
The focus on basic existential stakes within human lived experience should serve as the foundation on which comprehensive frameworks can be built to address the complexities and prospects for ethical machines, responsible biometrics and AI.

————————————

Name: Barbara Plank, https://bplank.github.io/
Title: Professor
Affiliation: IT University of Copenhagen, Denmark

Short Bio: Barbara Plank is Professor in the Computer Science Department at ITU (IT University of Copenhagen). She is also the Head of the Master in Data Science Program. She received her PhD in Computational Linguistics from the University of Groningen. Her research interests focus on Natural Language Processing, in particular transfer learning and adaptation, learning from beyond the text, and in general learning under limited supervision and from fortuitous data sources. She has co-organised several workshops and international conferences, among them the PEOPLES workshop (since 2016) and the first European NLP Summit (EurNLP 2019). Barbara was general chair of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa 2019) and workshop chair for ACL 2019. Barbara is a member of the advisory board of the European Association for Computational Linguistics (EACL) and vice-president of the Northern European Association for Language Technology (NEALT).

Title of the Presentation (preliminary): "Ethics, AI, Technology and Society: Perspectives from Natural Language Processing"

Abstract: TBA

——————————

Name: Moshe Vardi, https://www.cs.rice.edu/~vardi/
Title: George Distinguished Service Professor
Affiliation: Department of Computer Science, Rice University (USA)

Short Bio: Moshe Y. Vardi is a University Professor, and the George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is the author or co-author of over 650 papers, as well as two books. He is the recipient of several scientific awards, is a fellow of several societies and a member of several honorary academies. He holds seven honorary doctorates.
He is a Senior Editor of Communications of the ACM, the premier publication in computing, focusing on the societal impact of information technology.

Title of the Presentation (preliminary): "Ethics Washing in AI"

Abstract: Over the past decade Artificial Intelligence in general, and Machine Learning in particular, have made impressive advances in image recognition, game playing, natural-language understanding and more. But there were also several instances where we saw the harm that these technologies can cause when they are deployed too hastily. In response, there has been much recent talk of AI ethics. But talk is cheap. "Ethics washing" – also called "ethics theater" – is the practice of fabricating or exaggerating a company's interest in equitable AI systems that work for everyone. I will argue that the ethical lens is too narrow. The real issue is how to deal with technology's impact on society. Technology is driving the future, but who is doing the steering?

——————————

MODERATOR:

Name: Olle Häggström, http://www.math.chalmers.se/~olleh/
Affiliation: Department of Mathematical Sciences, Chalmers (Sweden)

Short Bio: Olle Häggström is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences. The bulk of his research qualifications are in probability theory, but in recent years he has shifted focus towards existential risk and AI safety. He has worked on AI policy at both the national and the EU level, as well as with the World Economic Forum.

Partnership workshop – good examples of collaborations between Chalmers and RISE
https://www.chalmers.se/en/areas-of-advance/materials/Calendar/Pages/Partnership-workshop-%E2%80%93-good-examples-of-collaborations-between-Chalmers-and-RISE.aspx
Chalmers Kårhus

SAVE THE DATE: On December 2, Chalmers and RISE invite you to a workshop presenting good examples of research collaborations. The purpose is to inspire, to show the opportunities of working together, and to show how these values have been achieved. The primary target group is researchers at Chalmers and RISE. More information and a program will follow.

Production – Initiative seminar 2022
https://www.chalmers.se/en/areas-of-advance/production/calendar/Pages/initiativ-seminar-2022-resilient-production.aspx
RunAn conference hall, Chalmersplatsen 1, Kårhuset

Focus on resilient production. Stay tuned, more info to come!