Events: Centre CHAIR
Upcoming events at Chalmers University of Technology
http://www.chalmers.se/sv/om-chalmers/kalendarium
Fri, 26 Nov 2021 10:49:07 +0100


Guest lecture – Prof. Mehul Bhatt
https://www.chalmers.se/en/centres/chair/events/Pages/Guest-lecture-%E2%80%93-Prof--Mehul-Bhatt.aspx
Location: MB lecture hall, Hörsalsvägen 5, Gamla M-huset

Guest lecture with Prof. Mehul Bhatt from Örebro University. The lecture on cognitive vision addresses computational vision and perception at the interface of language, logic, cognition, and artificial intelligence.

The lecture emphasises application areas where the processing and explainable semantic interpretation of (potentially large volumes of) dynamic visuospatial imagery are central, e.g. commonsense scene understanding; visual cognition for cognitive robotics, human-robot interaction, and autonomous driving; narrative interpretation from the viewpoints of visuoauditory perception and digital media design; and semantic interpretation of multimodal human-behavioural data.

The lecture will highlight Deep (Visuospatial) Semantics, denoting the existence of systematically formalised declarative AI methods (e.g., pertaining to reasoning about space and motion) that support semantic visual question-answering, relational learning, non-monotonic visuospatial abduction, and simulation of embodied interaction. The lecture demonstrates the integration of methods from knowledge representation and computer vision, with a focus on combining reasoning and learning about space, action, motion, and multimodal interaction. This is presented against the backdrop of areas as diverse as autonomous driving, cognitive robotics, eye-tracking-driven visual perception research (e.g., for visual art, architecture design, and cognitive media studies), and domains in psychology and behavioural research where data-centred analytical methods are gaining momentum. The lecture covers both applications and basic methods concerned with topics such as explainable visual perception, semantic video understanding, language generation from video, declarative spatial reasoning, and computational models of narrative. It will position an emerging line of research that brings together a novel and unique combination of research methodologies, academics, and communities encompassing AI, machine learning, vision, cognitive linguistics, psychology, visual perception, and spatial cognition and computation.
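For readers curious what "declarative spatial reasoning" over visual scenes can look like, here is a minimal, purely illustrative Python sketch: it turns bounding-box detections into symbolic spatial facts that a downstream reasoner could consume. All names and relations are hypothetical and are not taken from Prof. Bhatt's actual frameworks.

```python
# Illustrative only: a toy flavour of qualitative spatial reasoning over
# object detections. Hypothetical names; not code from Prof. Bhatt's systems.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box for a detected object."""
    name: str
    x1: float
    y1: float
    x2: float
    y2: float

def left_of(a: Box, b: Box) -> bool:
    # a lies entirely to the left of b
    return a.x2 < b.x1

def overlaps(a: Box, b: Box) -> bool:
    # the boxes share some area
    return not (a.x2 < b.x1 or b.x2 < a.x1 or a.y2 < b.y1 or b.y2 < a.y1)

def relations(objects: list[Box]) -> list[tuple[str, str, str]]:
    """Derive symbolic facts that a declarative reasoner could consume."""
    facts = []
    for a in objects:
        for b in objects:
            if a is b:
                continue
            if left_of(a, b):
                facts.append(("left_of", a.name, b.name))
            if overlaps(a, b) and a.name < b.name:  # emit each pair once
                facts.append(("overlaps", a.name, b.name))
    return facts

scene = [Box("pedestrian", 0, 0, 2, 5), Box("car", 3, 0, 8, 4)]
print(relations(scene))  # [('left_of', 'pedestrian', 'car')]
```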
About the speaker: Mehul Bhatt is Professor in the School of Science and Technology at Örebro University (Sweden) and a Guest Professor at the University of Skövde (Sweden). His basic research focuses on formal, cognitive, and computational foundations for AI technologies, with a principal emphasis on knowledge representation, semantics, integration of commonsense reasoning and learning, explainability, and spatial representation and reasoning.

Mehul Bhatt steers CoDesign Lab (www.codesign-lab.org), an initiative aimed at addressing the confluence of Cognition, Artificial Intelligence, Interaction, and Design Science for the development of human-centred cognitive assistive technologies and interaction systems. Since 2014 he has directed the research and consulting group DesignSpace (www.design-space.org), and he pursues ongoing research in Cognitive Vision (www.codesign-lab.org/cognitive-vision) and Spatial Reasoning (www.spatial-reasoning.com).

Mehul Bhatt obtained a bachelor's degree in economics (India), a master's in information technology (Australia), and a PhD in computer science (Australia). He has been a recipient of an Alexander von Humboldt Fellowship, a German Academic Exchange Service (DAAD) award, and an Australian Postgraduate Award (APA). He was the University of Bremen nominee for the German Research Foundation (DFG) award Heinz Maier-Leibnitz-Preis 2014. Prior to moving to Sweden, Mehul Bhatt was Professor at the University of Bremen (Germany). Further details are available via www.mehulbhatt.org.


AI Ethics with Jannice Käll
https://www.chalmers.se/en/centres/chair/events/Pages/AI-Ethics-with-Jannice-K%C3%A4ll.aspx
Location: KA lecture hall, Kemigården 4, Kemi
Tuesday 7 December, 13:15-14:15

AI Ethics seminar with Jannice Käll, Senior Lecturer in Sociology of Law at Lund University.

Title: AI, law, and ethics: from ethics washing to ethics bashing, towards another form of ethics via posthumanist theory

This seminar will address the role of ethics in the discussion surrounding legislative efforts regarding AI. Ethics has surfaced widely in discussions of how to mitigate the negative effects of AI. Critical interventions have shown, however, that ethics risks being used as a concept that merely washes away more fundamental questions of distributive justice. Yet these critical interventions can easily turn into a one-sided form of ethics bashing, which risks diminishing the potential of reconsidering the role that AI may play in creating a more just society. For this reason, it will be suggested that a more nuanced idea of both ethics and law can be explored via posthumanist critical theory.

Jannice Käll holds an LL.D. in Legal Theory and is Senior Lecturer in Sociology of Law at Lund University. Her research concerns the digitalization of law and the commodification of digital life-worlds.


Chalmers AI Research Center – Zooming out!
https://www.chalmers.se/en/centres/chair/events/Pages/Chalmers-AI-Research-Center-%E2%80%93-A(I)vancez!.aspx
Location: Scaniasalen, Kårhuset

The Director of CHAIR invites you to an AI Afternoon!
December 8, 11:30-15:00, followed by an AI Talk screening 15:00-16:00

Register here: https://ui.ungpd.com/Surveys/621dee3f-7e67-4143-8a07-f430d5e384ab
The number of participants is limited, so registration is required. Lunch wraps are served ahead of the meeting; please state your preferences in the registration form.

A few years have passed since the launch of CHAIR, and during that time AI and its applications in science and society have continued to develop at an incredible pace. With the opportunity to meet again in person, we would like to invite you to an afternoon with CHAIR where we look at the current state of AI, exciting research, different perspectives on AI within Chalmers, and the future of CHAIR.

As the field of AI changes, so will we, and we would like to hear all your ideas and perspectives on the future of AI and our centre. Take this opportunity to shape the future of CHAIR. We hope to see you on December 8!

Welcome,
Daniel Gillblad, Director of CHAIR


AI Talks with Cynthia Rudin
https://www.chalmers.se/en/centres/chair/events/Pages/AI-Talks-Cynthia-Rudin.aspx
Online, Zoom
December 8, 2021, 3:00-4:00 pm (Swedish time)

Title: Interpretable Machine Learning

Register by subscribing to our mailing list: https://ui.ungpd.com/Surveys/0649eac3-12ec-4d41-a640-d20a7d4e82f7

Abstract:
With the widespread use of machine learning, there have been serious societal consequences of using black-box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice, racially biased models in healthcare, and inexplicable loan decisions in finance. Transparency and interpretability of machine learning models are critical in high-stakes decisions. However, there are clear reasons why organizations might use black-box models instead: it is easier to profit from inexplicable predictive models than from transparent ones, and it is actually much easier to construct complicated models than interpretable ones. Most importantly, there is a widely held belief that more accurate models must be more complicated, and that more complicated models cannot possibly be understood by humans. Both parts of this argument, however, lack scientific evidence and are often untrue in practice. There are many cases in which interpretable models are just as accurate as their black-box counterparts on the same dataset, as long as one is willing to search carefully for such models.
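This claim is easy to probe on any single dataset. A minimal sketch using scikit-learn follows; the dataset and model choices are our own illustration, not Dr. Rudin's methodology. It pits a depth-limited decision tree, small enough to read end to end, against a random-forest black box and compares cross-validated accuracy.

```python
# Illustrative only: compare a small, interpretable model against a
# black-box ensemble on one public dataset. Results vary by dataset;
# this is a sketch of the comparison, not Dr. Rudin's method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# An interpretable model: a tree a human can inspect in full.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)
# A typical black-box baseline.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("depth-3 tree", interpretable),
                    ("random forest", black_box)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy = {acc:.3f}")
```

On easy tabular problems like this one, the two scores are often close, which is the phenomenon the talk examines; the careful model search the abstract mentions goes well beyond this sketch.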
In her talk, Dr. Rudin will discuss the interesting phenomenon that interpretable machine learning models are often as accurate as their black-box counterparts, giving examples of such cases encountered throughout her career. One example is predicting manhole fires and explosions in New York City, in collaboration with the power company; this was the project that ultimately drew Dr. Rudin to the topic of interpretable machine learning. The project was extremely difficult due to the complexity of the data, and interpretability was essential to her team's ability to troubleshoot the model. In a second example, she will discuss how interpretable machine learning models can be used for extremely high-stakes decisions, such as caring for critically ill patients in hospital intensive care units; here, interpretable machine learning is used to predict seizures in patients under continuous electroencephalogram (cEEG) monitoring. In a third example, she will discuss predicting criminal recidivism, touching upon the scandal surrounding the use of a black-box model in the U.S. justice system and questioning whether we truly need such a model at all.

About the speaker: Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics & bioinformatics at Duke University, where she directs the Interpretable Machine Learning Lab (formerly the Prediction Analysis Lab). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and of the Institute of Mathematical Statistics.


AI Ethics with Karl de Fine Licht
https://www.chalmers.se/en/centres/chair/events/Pages/AI-Ethics-with-Karl-de-Fine-Licht.aspx
Location: HC2 lecture hall, Hörsalsvägen 14, Hörsalar HC
Tuesday 14 December, 13:15-14:15 (Swedish time)

AI Ethics seminar with Karl de Fine Licht, Senior Lecturer in Ethics and Technology at Chalmers University of Technology.

Title: Artificial intelligence in public decision-making: on how transparency can and cannot be used to foster legitimacy

Transparency has been a hot topic in recent years when it comes to developing and implementing artificial intelligence (AI), especially regarding the use of AI in public decision-making, such as when AI is used to determine whether someone should get a loan at the bank or social security benefits from the social services. One of the main ideas expressed in the debate about AI in public decision-making is that these processes need to be open in some way to be legitimate, and that the opaqueness of AI systems (as opposed to human ones) poses specific challenges in this regard.
In this talk, de Fine Licht will discuss how we can (and cannot) use transparency to increase legitimacy when using AI in public decision-making.

Karl de Fine Licht has a PhD in Practical Philosophy and is Senior Lecturer in Ethics and Technology at Chalmers University of Technology. He specializes in, among other things, AI and public decision-making.


AI Talks with Susan Murphy
https://www.chalmers.se/en/centres/chair/events/Pages/AI-Talks-Susan-Murphy.aspx
Online, Zoom
January 19, 2022, 3:00-4:00 pm (Swedish time)

Title: T.B.D.

Register by subscribing to our mailing list: https://ui.ungpd.com/Surveys/0649eac3-12ec-4d41-a640-d20a7d4e82f7

Abstract: T.B.D.

About the speaker: Susan A. Murphy is a Radcliffe Alumnae Professor at Harvard Radcliffe Institute and a professor of statistics and computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences. She leads the Statistical Reinforcement Learning Lab, which develops data-analytic algorithms and methods for informing sequential decision making in health.
In particular, the lab works on (1) constructing individualized sequences of treatments (also known as adaptive interventions) for use in informing clinical decision making, and (2) constructing real-time individualized sequences of treatments (also known as just-in-time adaptive interventions) delivered by mobile devices. For her work on trial designs and analytics, Dr. Murphy was awarded a MacArthur Fellowship in 2013, was elected a member of the National Academy of Medicine in 2014, and was elected a member of the National Academy of Sciences in 2016.
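The decision problem behind a just-in-time adaptive intervention can be viewed as a bandit: at each decision point the system chooses whether to deliver a treatment, observes a proximal outcome, and updates. A minimal, purely illustrative Thompson-sampling sketch follows; the action set and numbers are hypothetical, and this is not an algorithm from the Statistical Reinforcement Learning Lab.

```python
# Illustrative only: a Beta-Bernoulli Thompson-sampling rule for a
# "send a mobile prompt or not" decision, the kind of sequential choice
# a just-in-time adaptive intervention must make. Hypothetical setup;
# not an algorithm from Dr. Murphy's lab.
import random

# Beta(1, 1) priors over each action's success probability:
# action 0 = do nothing, action 1 = send a mobile prompt.
alpha = [1.0, 1.0]
beta = [1.0, 1.0]

def choose_action() -> int:
    """Sample a plausible success rate per action and pick the best."""
    draws = [random.betavariate(alpha[a], beta[a]) for a in (0, 1)]
    return draws.index(max(draws))

def update(action: int, success: bool) -> None:
    """Posterior update from the observed proximal outcome."""
    if success:
        alpha[action] += 1.0
    else:
        beta[action] += 1.0

# Simulated decision points with made-up effect sizes: prompts "succeed"
# 60% of the time, doing nothing 40% of the time.
true_p = [0.4, 0.6]
for _ in range(500):
    a = choose_action()
    update(a, random.random() < true_p[a])

print("posterior means:",
      [alpha[a] / (alpha[a] + beta[a]) for a in (0, 1)])
```

Real adaptive interventions add context (time of day, sensor data), guard against over-prompting, and are evaluated in micro-randomized trials; the sketch shows only the core explore-exploit update.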