Events: Computer Science and Engineering
http://www.chalmers.se/sv/om-chalmers/kalendarium
Upcoming events at Chalmers University of Technology
Mon, 18 Oct 2021 10:49:33 +0200


AI Talks with Martin Danelljan
https://www.chalmers.se/en/centres/chair/events/Pages/AI-Talks-Martin-Danelljan.aspx
October 20, 2021, 3:00-4:00 pm (Swedish time)
Online, Zoom

Title: Deep Visual Reasoning with Optimization-based Network Modules

Register by subscribing to the mailing list: https://ui.ungpd.com/Surveys/0649eac3-12ec-4d41-a640-d20a7d4e82f7

Abstract: Deep learning approaches have achieved astonishing performance in numerous vision applications, including image classification, object detection, and semantic segmentation. While these problems are easily treated with standard feed-forward architectures, many computer vision problems require more complex reasoning about the information given during inference. In particular, more sophisticated autonomous agents need to be able to learn new concepts and abilities “on the fly”, given only limited data and supervision. However, developing effective end-to-end learnable methods for few-shot and online learning tasks has turned out to be a formidable challenge.

We tackle this challenge by designing deep network modules that internally optimize an objective. Since key problems in many computer vision tasks can be formulated as objective functions, optimization-based network modules are able to perform effective and efficient reasoning in such circumstances. By further learning the objective function itself, we obtain a general family of deep network modules, capable of more complex non-local reasoning. We will cover their application within a variety of tasks, including visual tracking, video object segmentation, few-shot segmentation, dense correspondence estimation, and multi-frame image restoration.
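To make the idea concrete, here is a minimal PyTorch sketch of an optimization-based network module, assuming a simple inner ridge-regression objective L(w) = ||Xw - y||^2 / n + lam * ||w||^2. The module name and the choice of objective are illustrative assumptions, not the talk's actual method; the point is only that a forward pass can unroll a few descent steps on an inner objective whose parameters (here lam and the step size) are learned end-to-end.

```python
import torch
import torch.nn as nn

class InnerOptimizationModule(nn.Module):
    """Toy optimization-based network module (illustrative sketch).

    The forward pass unrolls a few steepest-descent steps on an inner
    ridge-regression objective
        L(w) = ||X w - y||^2 / n + lam * ||w||^2,
    where the regularizer lam and the step size are themselves learnable,
    so the inner loop is differentiable end-to-end.
    """

    def __init__(self, num_steps: int = 5):
        super().__init__()
        self.num_steps = num_steps
        self.log_lam = nn.Parameter(torch.tensor(-2.0))   # learnable regularizer
        self.log_step = nn.Parameter(torch.tensor(-2.0))  # learnable step size

    def forward(self, X: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        lam, step = self.log_lam.exp(), self.log_step.exp()
        n = X.shape[0]
        w = X.new_zeros(X.shape[1])          # inner variable, initialized at zero
        for _ in range(self.num_steps):
            grad = 2.0 * X.t() @ (X @ w - y) / n + 2.0 * lam * w  # analytic grad of L(w)
            w = w - step * grad              # unrolled descent step, kept in the graph
        return w

# End-to-end training signal flows through the inner optimizer:
X = torch.randn(32, 4)
w_true = torch.tensor([1.0, -2.0, 0.5, 3.0])
y = X @ w_true
module = InnerOptimizationModule()
w_hat = module(X, y)
outer_loss = ((w_hat - w_true) ** 2).sum()   # meta-objective on the inner solution
outer_loss.backward()                        # gradients reach log_lam and log_step
print(module.log_lam.grad, module.log_step.grad)
```

Backpropagating an outer loss through the unrolled steps is what turns the inner optimizer into a trainable network module.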
About the speaker: Martin Danelljan is a senior researcher at ETH Zürich, Switzerland. He received his Ph.D. degree from Linköping University, Sweden, in 2018. His Ph.D. thesis was awarded the biennial Best Nordic Thesis Prize at SCIA 2019. His main research interests are meta and online learning, deep probabilistic models, and conditional generative models. His research includes applications to visual tracking, video object segmentation, dense correspondence estimation, and super-resolution. His research in the field of visual tracking, in particular, has attracted much attention, achieving first rank in the 2014, 2016, and 2017 editions of the Visual Object Tracking (VOT) Challenge and in the OpenCV State-of-the-Art Vision Challenge. He received the best paper award at ICPR 2016, the best student paper award at BMVC 2019, and an outstanding reviewer award at ECCV 2020. He serves as a senior PC member for AAAI 2022 and an area chair for CVPR 2022. He is also a co-organizer of the VOT, NTIRE, and AIM workshops.


AI Ethics with Dr Beth Singler
https://www.chalmers.se/en/centres/chair/events/Pages/AI-Ethics-with-Dr-Beth-Singler.aspx
26 October, 13:15-14:15 (Swedish time)
Online, Zoom

AI Ethics Online with Dr Beth Singler, Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge.

Register here: https://ui.ungpd.com/Surveys/2e29e41d-49ac-455e-a0b5-98adf6a42cd4

Title: The Dreams our Stuff is Made of: Trust, Agency, and Super-agency

Abstract: Drawing on ethnographic fieldwork on the popular discourse around artificial intelligence, this talk will explore some of the implications of our imaginaries of AI. These imaginaries shape our relationship with these advances in technology, and impact the decisions we make about their influence on society, culture, and justice. The talk will provide case studies of particular ‘tension points’ around ideas of trust, agency, and even super-agency, and argue for the role of both public engagement and education in the larger ‘AI ethics’ debate.

About the speaker: Dr Beth Singler is the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge. Prior to this she was the post-doctoral Research Associate on the “Human Identity in an Age of Nearly-Human Machines” project at the Faraday Institute for Science and Religion.
She has been an associate fellow at the Leverhulme Centre for the Future of Intelligence since 2016. Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics. As part of her public engagement work she has produced a series of short documentaries (https://bvlsingler.com/rise-of-the-machines-short-films-on-ai-and-robotics-available-online/); the first, Pain in the Machine, won the 2017 AHRC Best Research Film of the Year Award.

Most AI Ethics seminars are available on YouTube: https://www.youtube.com/channel/UC_4mfkM2YV94f-P4n81l-Bg


Spotlight on Research: Xiaobo Qu
https://www.chalmers.se/en/centres/chair/events/Pages/Spotlight-on-Research-Xiaobo-Qu.aspx
October 29, 13:00 (Swedish time)
Online, Zoom

Register here: https://ui.ungpd.com/Surveys/35e5999c-a2db-41e7-8644-02a6c2f821e1

Title: AI and Transportation Engineering: Case Studies, Trends and Some Thoughts

Abstract: In this talk, a few case studies will be presented on the applications of AI in transportation engineering. It will begin with a brief introduction to the discipline of transportation engineering: its origin, progression and future trends. Subsequently, the speaker will discuss how AI can reshape transportation engineering research and practice. The case studies include trajectory planning of connected and automated vehicles, pricing of shared mobility, traffic state estimation and flow prediction, and behavioral choice models.
About the speaker: Xiaobo Qu is a Full Professor with a Chair in the Department of Architecture and Civil Engineering, Chalmers University of Technology, Sweden. His research is focused on improving large, complex and interrelated urban mobility systems by integrating them with emerging technologies. More specifically, his research has been applied to the improvement of emergency services and the operation of electric vehicles and connected automated vehicles. He has authored or co-authored over 120 journal articles in top-tier journals, including 14 ESI highly cited papers. Before his current appointment, he was a professor (with tenure) at Chalmers (2018-2019) and a senior lecturer/lecturer (permanent positions, 2012-2017) at two Australian universities. He has been an elected Member of Academia Europaea (the Academy of Europe) since August 2020 and an elected Fellow of the European Academy of Sciences since January 2020.


AI Ethics with Henrik Skaug Sætra
https://www.chalmers.se/en/centres/chair/events/Pages/AI-Ethics-with-Henrik-Skaug-S%C3%A6tra.aspx
2 November, 13:15-14:15 (Swedish time)
Online, Zoom

AI Ethics Online with Henrik Skaug Sætra, associate professor at the Faculty of Computer Science, Engineering and Economics at Østfold University College.

Register here: https://ui.ungpd.com/Surveys/b0804ea5-4288-43d5-83ee-4f01980c5034

Title: Robotomorphy – becoming our creations?

Abstract: In this talk I discuss how robots and AI tell a story of how we humans perceive ourselves, and how these technologies in turn also change us. Robotomorphy describes what occurs when we project the characteristics and capabilities of robots onto ourselves, to make sense of the complicated and mysterious beings that we are. This may be inevitable, but also potentially unfortunate, because when robots become the blueprint for humanity, they simultaneously become benchmarks and ideals to live up to.

About the speaker: Henrik Skaug Sætra is an associate professor at the Faculty of Computer Science, Engineering and Economics at Østfold University College. He is a political scientist with a broad and interdisciplinary approach to issues of ethics, the individual and societal implications of technology, environmental ethics, and game theory. Sætra has in recent years worked extensively on the effects of technology on liberty and autonomy, and on various issues related to the use of social robots.


Act Sustainable: Can automated fact checkers clean up the mess?
https://www.chalmers.se/en/areas-of-advance/ict/calendar/Pages/Act-Sustainable-Can-automated-fact-checkers-clean-up-the-mess.aspx
Studenternas Hus, Götabergsgatan 17, Göteborg

Five days dedicated to sustainable development! The Act Sustainable week is soon up and running, starting 15 November. Chalmers, represented by the Information and Communications Technology Area of Advance, invites you to a morning session focused on automated fact-checking.

The dream of free dissemination of knowledge seems to be stranded in a swamp of tangled truth. Fake news proliferates. Digital echo chambers confirm biases. Even basic facts seem hard to agree upon. Is there hope in the battle to clean up this mess? Yes! Within the research area of information and communications technology, a lot of effort is going into software solutions (a minimal example is sketched after this entry).

Agenda:
09:45 Introduction, by Erik Ström, Director, Information and Communications Technology Area of Advance
10:00 Looking for the truth in the post-truth era, with Ivan Koychev, University of Sofia, Bulgaria
10:30 Computational Fact Checking for Textual Claims, with Paolo Papotti, Associate Professor, EURECOM, France
11:00 Pause
11:10 Panel discussion, moderated by Graham Kemp, professor, Department of Computer Science and Engineering, Chalmers, together with researchers from Chalmers University of Technology and the University of Gothenburg
12:00 The end

Read more and register: https://www.actsustainable.se/thursday21
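As a taste of the kind of software solution the session is about, here is a minimal sketch of one common building block of automated fact-checking: scoring a textual claim against a piece of evidence with an off-the-shelf natural language inference model. The model choice and the check_claim helper are our own illustrative assumptions, not the systems the speakers will present; real fact-checkers also retrieve evidence and detect check-worthy claims, steps not shown here.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # off-the-shelf natural language inference model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def check_claim(evidence: str, claim: str) -> dict:
    """Score whether the evidence entails, contradicts, or is neutral to the claim.

    Hypothetical helper for illustration only: evidence is the NLI premise,
    the claim is the hypothesis.
    """
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    return {model.config.id2label[i]: p.item() for i, p in enumerate(probs)}

print(check_claim(
    "The Eiffel Tower is 330 metres tall.",
    "The Eiffel Tower is taller than 300 metres.",
))  # expected to put most probability on ENTAILMENT
```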
AI Talks with Cynthia Rudin
https://www.chalmers.se/en/centres/chair/events/Pages/AI-Talks-Cynthia-Rudin.aspx
December 8, 2021, 3:00-4:00 pm (Swedish time)
Online, Zoom

Title: Interpretable Machine Learning

Register by subscribing to the mailing list: https://ui.ungpd.com/Surveys/0649eac3-12ec-4d41-a640-d20a7d4e82f7

Abstract: With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice, racially biased models in healthcare, and inexplicable loan decisions in finance. Transparency and interpretability of machine learning models are critical in high-stakes decisions. However, there are clear reasons why organizations might use black box models instead: it is easier to profit from inexplicable predictive models than from transparent ones, and it is actually much easier to construct complicated models than interpretable ones. Most importantly, there is a widely held belief that more accurate models must be more complicated, and that more complicated models cannot possibly be understood by humans. Both parts of this last argument, however, lack scientific evidence and are often not true in practice. There are many cases in which interpretable models are just as accurate as their black box counterparts on the same dataset, as long as one is willing to search carefully for such models.
In her talk, Dr. Rudin will discuss the interesting phenomenon that interpretable machine learning models are often as accurate as their black box counterparts, giving examples of such cases encountered throughout her career. One example she will discuss is predicting manhole fires and explosions in New York City, working with the power company. This was the project that ultimately drew Dr. Rudin to the topic of interpretable machine learning. The project was extremely difficult due to the complexity of the data, and interpretability was essential to her team’s ability to troubleshoot the model. In a second example, she will discuss how interpretable machine learning models can be used for extremely high-stakes decisions, such as caring for critically ill patients in intensive care units of hospitals. Here, interpretable machine learning is used to predict seizures in patients undergoing continuous electroencephalogram (cEEG) monitoring. In a third example, she will discuss predicting criminal recidivism, touching upon the scandal surrounding the use of a black box model in the U.S. justice system and questioning whether we truly need such a model at all.
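The abstract's central claim, that a carefully chosen interpretable model can match a black box, is easy to probe on a standard benchmark. Below is a minimal scikit-learn sketch (our own illustration, not material from the talk) comparing a depth-3 decision tree, whose full decision logic can be printed and audited, against a 200-tree random forest; the dataset and hyperparameters are arbitrary assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Black box: a 200-tree random forest.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
# Interpretable: a depth-3 decision tree, small enough to read in full.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0)

for name, model in [("random forest", black_box), ("depth-3 tree", glass_box)]:
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")

# Unlike the forest, the tree's entire decision logic fits on one screen:
print(export_text(glass_box.fit(X, y), feature_names=list(data.feature_names)))
```

On this dataset the accuracy gap is typically small, which is the phenomenon the talk examines; whether that holds for any given application is exactly the empirical question one must check.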
About the speaker: Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics & bioinformatics at Duke University, and directs the Interpretable Machine Learning Lab (formerly the Prediction Analysis Lab). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the “Top 40 Under 40” by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics.