Events: Matematiska vetenskaper events at Chalmers University of Technology. Fri, 22 Oct 2021 11:51:54 +0200

Seminar<p>TBA</p><p>Hossein Raufi: Experiences from active teaching</p><br />The seminar is held in Swedish. Over the years I have experimented with various ways of getting students more active during lessons. In this seminar I will talk about some of these experiences. I am also very interested in hearing from others about their experiences and thoughts on active teaching.

Seminar in Algebraic Geometry and Number Theory<p>Euler, Skeppsgränd 3, and online</p><p>Tobias Magnusson, Chalmers/GU: Numerical Evaluation of Holomorphic Eichler Integrals via Generalized Second Order Modular Forms</p><br />Abstract: Holomorphic Eichler integrals occur as the simplest case of iterated Eichler-Shimura integrals. Their numerical values appear in the experimental study of path integrals associated with Feynman diagrams. In this talk, we describe how to evaluate holomorphic Eichler integrals efficiently. We express them as linear combinations of products of generalized second order Eisenstein series, whose evaluation is a significantly simpler task. Generalized second order modular forms have their origins in earlier work by Mertens-Raum, Chinta-Diamantis-O'Sullivan, and Goldfeld. This project is joint work with Albin Ahlbäck and Martin Raum.
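As background to the talk above: for a weight-k cusp form f = Σ aₙqⁿ, the holomorphic Eichler integral can be evaluated naively term by term as Σ aₙ n^(1-k) qⁿ; the linear-combination method of the talk is what makes evaluation efficient. The following self-contained Python sketch only illustrates that naive baseline for the discriminant form Δ (weight 12); the coefficient recursion and the choice of Δ are illustrative assumptions, not the speakers' code.

```python
# Naive termwise evaluation of the holomorphic Eichler integral of the
# weight-12 cusp form Delta = q * prod_{n>=1} (1 - q^n)^24.
# For f = sum a_n q^n of weight k, the Eichler integral is
#   E_f(tau) = sum a_n * n^(1-k) * q^n,   q = exp(2*pi*i*tau).
import cmath

def delta_coeffs(N):
    """Ramanujan tau(1..N), read off from the eta-product expansion."""
    # Build the power series prod_{n>=1} (1 - q^n)^24 up to q^(N-1).
    c = [0] * N
    c[0] = 1
    for n in range(1, N):
        for _ in range(24):
            # Multiply the truncated series by (1 - q^n), in place.
            for j in range(N - 1, n - 1, -1):
                c[j] -= c[j - n]
    # Delta = q * series, so tau(m) = coefficient of q^(m-1) above.
    return [c[m - 1] for m in range(1, N + 1)]

def eichler_integral(tau_pt, N=50, k=12):
    """Truncated termwise sum  sum_{n<=N} tau(n) * n^(1-k) * q^n."""
    q = cmath.exp(2j * cmath.pi * tau_pt)
    a = delta_coeffs(N)
    return sum(a[n - 1] * n ** (1 - k) * q ** n for n in range(1, N + 1))
```

Because |q| < 1 in the upper half-plane and the terms carry the rapidly decaying factor n^(1-k), the truncated sum converges quickly for tau well away from the real line, e.g. at tau = i; the point of the talk is that this naive approach degrades near the real line, where the Eisenstein-series representation remains tractable.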
AI Ethics with Dr Beth Singler<p>Online, Zoom</p><p>AI Ethics Online with Dr Beth Singler, Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge.</p><div>26 October 13:15–14:15</div> <div><a href="" target="_blank">Register here</a></div> <div><strong>Title:</strong> The Dreams our Stuff is Made of: Trust, Agency, and Super-agency.</div> <div><p><strong>Abstract:</strong> Drawing on ethnographic fieldwork on the popular discourse around artificial intelligence, this talk will explore some of the implications of our imaginaries of AI. These imaginaries shape our relationship with these advances in technology, and impact the decisions we make about their influence on society, culture, and justice. The talk will provide case studies of particular ‘tension points’ around ideas of trust, agency, and even super-agency, and argue for the role of both public engagement and education in the larger ‘AI ethics’ debate.</p> <p><strong>About the speaker:</strong> <strong>Dr Beth Singler</strong> is the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge. Prior to this she was the post-doctoral Research Associate on the “Human Identity in an age of Nearly-Human Machines” project at the Faraday Institute for Science and Religion. She has been an associate fellow at the Leverhulme Centre for the Future of Intelligence since 2016. Beth explores the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics. As a part of her public engagement work she has produced <a href="" target="_blank">a series of short documentaries</a>, and the first, Pain in the Machine, won the 2017 AHRC Best Research Film of the Year Award.</p> <div><img src="/SiteCollectionImages/Centrum/CHAIR/events/Beth__Singler.jpg" alt="" style="margin:5px;width:140px;height:96px" /></div> <div>Most AI Ethics seminars are available on <a href="" target="_blank">YouTube</a>.</div></div>

On Research: Xiaobo Qu<p>Online, Zoom</p><p>In this talk, a few case studies will be presented on the applications of AI in transportation engineering. It will begin with a brief introduction to the discipline of transportation engineering: its origin, progression and future trends. The speaker will then discuss how AI can reshape transportation engineering research and practice. The case studies include trajectory planning of connected and automated vehicles, pricing of shared mobility, traffic state estimation and flow prediction, and behavioural choice models.</p> <div>October 29, 13:00 (Swedish time)</div> <div><a href="" target="_blank">Register here</a></div> <div><strong>Title:</strong> AI and Transportation Engineering: Case Studies, Trends and Some Thoughts</div> <div><strong>Xiaobo Qu</strong> is a Full Professor with a Chair in the Department of Architecture and Civil Engineering, Chalmers University of Technology, Sweden. His research focuses on improving large, complex and interrelated urban mobility systems by integrating them with emerging technologies. More specifically, his research has been applied to the improvement of emergency services and the operations of electric vehicles and connected automated vehicles. He has authored or co-authored over 120 journal articles in top-tier journals, including 14 ESI highly cited papers. Before his current appointment, he was a professor (with tenure) at Chalmers (2018–2019) and a senior lecturer/lecturer (permanent positions, 2012–2017) at two Australian universities. He has been an elected Member of Academia Europaea (the Academy of Europe) since August 2020, and an elected Fellow of the European Academy of Sciences since January 2020.</div>

AI Ethics with Henrik Skaug Sætra<p>Online, Zoom</p><p>AI Ethics Online with Henrik Skaug Sætra, associate professor at the Faculty of Computer Science, Engineering and Economics at Østfold University College.</p><div>2 November 13:15–14:15 (Swedish time)</div> <div><a href="" target="_blank">Register here</a></div> <div><strong>Title:</strong> Robotomorphy – becoming our creations?</div> <div><p><strong>Abstract:</strong> In this talk I discuss how robots and AI tell a story of how we humans perceive ourselves, and how these technologies in turn also change us. Robotomorphy describes what occurs when we project the characteristics and capabilities of robots onto ourselves, to make sense of the complicated and mysterious beings that we are. This may be inevitable, but also potentially unfortunate, because when robots become the blueprint for humanity, they simultaneously become benchmarks and ideals to live up to.</p> <p><strong>Henrik Skaug Sætra</strong> is an associate professor at the Faculty of Computer Science, Engineering and Economics at Østfold University College. He is a political scientist with a broad, interdisciplinary approach to issues of ethics, the individual and societal implications of technology, environmental ethics, and game theory. In recent years Sætra has worked extensively on the effects of technology on liberty and autonomy, and on various issues related to the use of social robots.</p></div>

Computational and Applied Mathematics (CAM) seminar<p>MV:L14 and online</p><p>Daniel Peterseim, Universität Augsburg: Energy-adaptive Riemannian Optimization on the Stiefel Manifold</p><p><br />Abstract: This talk addresses the numerical simulation of nonlinear eigenvector problems, such as the Gross-Pitaevskii and Kohn-Sham equations arising in computational physics and chemistry. These problems characterize critical points of energy minimization problems on the infinite-dimensional Stiefel manifold. To compute minimizers efficiently, we propose a novel Riemannian gradient descent method induced by an energy-adaptive metric. Quantified convergence of the method is established under suitable assumptions on the underlying problem. A non-monotone line search and the inexact evaluation of Riemannian gradients substantially improve the overall efficiency of the method. Numerical experiments illustrate the performance of the method and demonstrate its competitiveness with well-established schemes.</p> <p>This is joint work with Robert Altmann (U Augsburg) and Tatjana Stykel (U Augsburg).</p>

Computational and Applied Mathematics (CAM) seminar<p>MV:L14 and online</p><p>Stefan Horst Sommer, University of Copenhagen: Stochastic shape analysis and probabilistic geometric statistics</p><br />Abstract: Analysis and statistics of shape variation can be formulated in geometric settings with geodesics modelling transitions between shapes.
The talk will concern extensions of these smooth geodesic models to account for noise and uncertainty: stochastic shape processes and stochastic shape matching algorithms. In the stochastic setting, matching algorithms take the form of bridge simulation schemes, which also provide approximations of the transition density of the stochastic shape processes. The talk will cover examples of stochastic shape processes and connected bridge simulation algorithms. I will connect these ideas to geometric statistics, the statistical analysis of general manifold-valued data, particularly to the diffusion mean.

Act Sustainable: Can automated fact checkers clean up the mess?<p>Studenternas Hus, Götabergsgatan 17, Göteborg</p><p>Five days dedicated to sustainable development! The Act Sustainable week is soon up and running, starting 15 November. Chalmers, represented by the Information and Communications Area of Advance, invites you to a morning session focused on automated fact-checking.</p><div><div>The dream of free dissemination of knowledge seems to be stranded in a swamp of tangled truth. Fake news proliferates. Digital echo chambers confirm biases. It even seems hard to agree upon basic facts. Is there hope in the battle to clean up this mess? Yes! Within the research area of information and communications technology, we try to find ways through software solutions.</div> <div>In this morning session, you will meet two invited researchers, both developing automated fact-checking methods. The talks are followed by a panel discussion, bringing a broader perspective on the problem. The panelists are guests from Chalmers and the University of Gothenburg, together with the keynote speakers.</div> <div><b>Agenda:</b></div> <div>09:45 <b>Introduction</b> by <b>Erik Ström</b>, Director, Information and Communications Technology Area of Advance</div> <div>10:00 <b>Looking for the truth in the post-truth era</b> with <b>Ivan Koychev</b>, University of Sofia, Bulgaria</div> <div>10:30 <b>Computational Fact-Checking for Textual Claims</b> with <b>Paolo Papotti</b>, Associate Professor, EURECOM, France</div> <div>11:00 <b>Break</b></div> <div>11:10 <b>Panel discussion</b>, moderated by <b>Graham Kemp</b>, Professor, Department of Computer Science and Engineering, Chalmers, with researchers from Chalmers University of Technology and the University of Gothenburg</div> <div>12:00 <b>The end</b></div> <div><a href="" target="_blank" title="link to Act Sustainable website">Read more and register here</a></div></div>

Seminar<p>TBA</p><p>TBA</p><br />The seminar is held in Swedish.

Webinar for prospective Master's students<p>Online</p><p>Information about Master's programmes in Physics, Mathematics and Complex Adaptive Systems</p><p><br />Get your Master's degree in Gothenburg, a city with one of the largest research communities in Physics and Mathematics in Sweden. During this webinar, you will learn more about our Master's programmes in Physics, Mathematics and Complex Adaptive Systems.</p> <p>Join our programme coordinators and students in an information session followed by a live Q&amp;A. Feel free to ask your questions about, for example, the studies, student life, or how to find housing in Gothenburg.</p>

Workshop: good examples of collaborations between Chalmers and RISE<p>Lecture Hall Palmstedt, university building, Chalmersplatsen 2, Campus Johanneberg</p><p>SAVE THE DATE: On 2 December, Chalmers and RISE invite you to a workshop presenting good examples of research collaborations. The purpose is to inspire, to show the opportunities of working together, and to show how these values have been achieved. The primary target group is researchers at Chalmers and RISE. More information and a programme will follow on 20 October.</p>Time and place: 2 December, 13:00–17:00, Palmstedtsalen, Chalmers student union building, Chalmersplatsen 2, Campus Johanneberg.

AI Talks with Cynthia Rudin<p>Zoom</p><p>December 8, 2021, 3:00–4:00 pm (Swedish time)<br />Online, Zoom</p><div><strong>Title:</strong> Interpretable Machine Learning</div> <div><a href="">Register by subscribing to our mailing list here.</a></div> <p class="chalmersElement-P"><strong>Abstract:</strong> With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice, racially biased models in healthcare, and inexplicable loan decisions in finance. Transparency and interpretability of machine learning models are critical in high-stakes decisions. However, there are clear reasons why organizations might use black box models instead: it is easier to profit from inexplicable predictive models than from transparent models, and it is actually much easier to construct complicated models than interpretable ones. Most importantly, there is a widely held belief that more accurate models must be more complicated, and that more complicated models cannot possibly be understood by humans. Both parts of this last argument, however, are lacking in scientific evidence and are often not true in practice. There are many cases in which interpretable models are just as accurate as their black box counterparts on the same dataset, as long as one is willing to search carefully for such models.</p> <p class="chalmersElement-P">In her talk, Dr. Rudin will discuss the interesting phenomenon that interpretable machine learning models are often as accurate as their black box counterparts, giving examples of such cases encountered throughout her career. One example she will discuss is predicting manhole fires and explosions in New York City, working with the power company. This was the project that ultimately drew Dr. Rudin to the topic of interpretable machine learning. The project was extremely difficult due to the complexity of the data, and interpretability was essential to her team's ability to troubleshoot the model. In a second example, she will discuss how interpretable machine learning models can be used for extremely high-stakes decisions, such as caring for critically ill patients in hospital intensive care units. Here, interpretable machine learning is used to predict seizures in patients under continuous electroencephalogram monitoring (cEEG). In a third example, she will discuss predicting criminal recidivism, touching upon the scandal surrounding the use of a black box model in the U.S. justice system and questioning whether we truly need such a model at all.</p> <div><strong>About the speaker</strong><br /><img src="/SiteCollectionImages/Centrum/CHAIR/events/AI_Talks_CynthiaRudin.png" class="chalmersPosition-FloatLeft" alt="" style="margin-top:5px;margin-bottom:5px;margin-left:10px;height:232px;width:180px" />Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics &amp; bioinformatics at Duke University, where she directs the Interpretable Machine Learning Lab (formerly the Prediction Analysis Lab). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the “Top 40 Under 40” by Poets and Quants in 2015, and was named one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics.</div>

Seminar<p>TBA</p><p>Johanna Pejlare: TBA</p><br />The seminar is held in Swedish.<br />