We had a successful two-day seminar on artificial intelligence at Lindholmen conference centre, Gothenburg, starting with a celebration kickoff for Chalmers AI Research Centre (CHAIR). Thank you to everyone who participated!
We will publish speaker slides here as we receive them from the speakers:
Photos (above): Johan Bodell
Invited keynote speakers
Samuel Kaski, Director, Finnish Center for Artificial Intelligence FCAI
Title: Understandable data-efficient AI
For solving real-world problems we need AIs that work not only on massive data but also on the amount of data available. And they need to be able to work with humans, augmenting them. For this they need to be data-efficient and understandable, and trustworthiness would not hurt either. I will discuss these central goals and the vision of the Finnish Center for Artificial Intelligence FCAI for achieving them.
Bio: Samuel Kaski is a Professor of Computer Science at Aalto University, Finland. He is the Director of the Finnish Center for Artificial Intelligence FCAI.
Samuel Kaski's research focuses on probabilistic machine learning, that is, probabilistic modelling and Bayesian inference, applied to difficult problems that are interesting and societally important. His work includes the interrelated topics of analysis of multiple data sources, human-in-the-loop machine learning, simulator-based inference (likelihood-free inference with ABC), and privacy-preserving learning.
Mounia Lalmas Roelleke, Research Director at Spotify and Honorary Professor at University College London
Title: Music recommendations (research) at Spotify
The aim of the Personalization mission at Spotify is “to match fans and artists in a personal and relevant way”. In this talk, I will describe some of the (research) work to achieve this, from using machine learning and AI to metric validation and evaluation methodology. I will describe work done in the context of Home and Search.
Bio: Mounia Lalmas is a Director of Research at Spotify, and the Head of Tech Research in User Engagement, where she leads an interdisciplinary team of research scientists working on personalization and discovery. Mounia also holds an honorary professorship at University College London. Before that, she was a Director of Research at Yahoo, where she led a team of researchers working on advertising quality for Gemini, Yahoo's native advertising platform. She also worked with various teams at Yahoo on topics related to user engagement in the context of news, search, and user-generated content. Prior to this, she held a Microsoft Research/RAEng Research Chair at the School of Computing Science, University of Glasgow. Before that, she was Professor of Information Retrieval at the Department of Computer Science at Queen Mary, University of London. Her work focuses on studying user engagement in areas such as native advertising, digital media, social media, search, and now music. She has given numerous talks and tutorials on these and related topics. She is regularly a senior programme committee member at conferences such as WSDM, WWW and SIGIR. She was co-programme chair for SIGIR 2015 and WWW 2018. She is also the co-author of a book written as the outcome of her WWW 2013 tutorial on “measuring user engagement”.
Robin Teigland, Professor of Strategy, Management of Digitalization, Chalmers University of Technology
Title: Into the future with Artificial Intelligence: Opportunities and Challenges
AI offers endless opportunities for organizations to create value and reinvent themselves. Yet while executives believe that AI will enable their companies to obtain or sustain a competitive advantage, the vast majority have yet to extensively incorporate AI in their offerings or processes, much less have an AI strategy in place. This talk will discuss the digital transformation that AI is driving in society, along with some of the opportunities and challenges associated with this transformation.
Bio: Dr. Robin Teigland is Professor of Management of Digitalization at Chalmers University of Technology as well as Professor of Strategy and Digitalization at the Stockholm School of Economics. Robin has more than 20 years of research experience within social networks, strategy, innovation, entrepreneurship, and startup ecosystems. In particular, she is interested in how the convergence of disruptive technologies influences value creation in society as well as disrupts long-standing institutional structures. In 2017 and 2018 she was listed by the Swedish business magazine Veckans Affärer as one of Sweden’s most influential women, primarily in technology. In her free time, Robin loves to play with her five kids as well as surf at her favorite beach in Peniche, Portugal.
Terry Regier, Professor, Department of Linguistics, Cognitive Science Program, UC Berkeley
Title: Semantic representations in humans and machines
Human-like semantic representations are an important goal for AI. A natural approach to this goal is to draw insight from the world's languages: if we can identify core computational principles that underlie human semantic systems, we may then be able to apply those principles in machines. I will argue that human lexicons are informed by a simple and pervasive principle of communicative efficiency. I will present a general computational framework that instantiates this principle, and will show how that framework accounts for data from many languages. I will close by discussing prospects for applying these insights in machines.
Bio: Terry Regier is Professor of Linguistics and Cognitive Science and Director of the Cognitive Science Program at the University of California, Berkeley. He investigates the relation of language and cognition through computational methods, behavioral experiments, and cross-language semantic data, to understand why semantic categories vary across languages in the ways they do, and what that cross-language variation reveals about the mind and about communication.
Andreas Geiger, Professor of Computer Science, University of Tübingen, and head of the Max Planck Research Group on Autonomous Vision
Title: Computer Vision and the AI Revolution
Understanding the fundamental and mathematical principles behind visual perception, and the possibility of building a machine that perceives and acts like humans do, has been a long-standing goal in vision. While our community has come a long way since the early attempts in the 1950s, and while research has accelerated significantly with the recent AI hype, many problems remain unsolved. In my talk, I will first briefly outline the early history of computer vision. Next, I will discuss how machine learning entered the field and address one of the prime issues in the age of deep learning: the thirst for data. I will also present several possible strategies to address this problem, illustrated by recent research on 3D reconstruction, motion estimation, object recognition and shape parsing. Finally, I will argue that considering computer vision as an isolated problem is problematic, and that future vision research enabling autonomous agents such as self-driving cars must be embodied.
Bio: Andreas Geiger is a professor of computer science heading the Autonomous Vision Group (AVG). The research group is part of the University of Tübingen and the Max Planck Institute for Intelligent Systems. His research is in computer vision and machine learning with a focus on 3D scene understanding, parsing, reconstruction, material and motion estimation for autonomous intelligent systems such as self-driving cars or household robots. In particular, his group investigates how complex prior knowledge can be incorporated into computer vision algorithms for making them robust to variations in a complex 3D world.
Vijay Chandru, Co-founder of the diagnostic company Strand and Professor at the Indian Institute of Science
Title: The Unreasonable Effectiveness of Machine Learning in the Sciences of the Artificial
"The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is the title of an article published in 1960 by the physicist Eugene Wigner, and it has often been used to justify work in abstract mathematics. In this talk, the speaker will examine an analogous statement about machine learning and the claims being made about its ubiquitous application to solving decision problems in man-made systems. This will take us on a journey through the history of the decision sciences, the power of machine learning, and the challenges of backing a one-trick pony for problem solving. The talk will also present the interplay between artificial intelligence, intelligence augmentation and intelligence infrastructure in the context of personalized and precision medicine and other social impact domains.
Bio: Vijay Chandru had his formal training in Electrical Engineering (BITS, Pilani), in Systems Science and Engineering (UCLA) and in Decision Sciences (MIT). Building on this foundation, he has had over 35 years of experience straddling various geographies, academic environments and industries. His academic career in teaching and research in computational mathematics was substantially at Purdue University (1982-92) and the Indian Institute of Science (IISc) since 1992. Vijay is currently on the faculty of interdisciplinary research at the Indian Institute of Science. As a technology entrepreneur he was one of the inventors of the handheld computer Simputer® and currently serves as the Founder Director of Strand Life Sciences, India’s leading precision medicine company, both spinoffs from IISc. He is an elected fellow of both the Indian Academy of Sciences and the Indian National Academy of Engineering.
Virginia Dignum, Professor of Social and Ethical Artificial Intelligence, University of Umeå
Title: Responsible Artificial Intelligence
Artificial Intelligence (AI) systems are increasingly making decisions that directly affect users and society, raising many questions across social, economic, political, technological, legal, ethical and philosophical issues. Can machines make moral decisions? Should artificial systems ever be treated as ethical entities? What are the legal and ethical consequences of human enhancement technologies, or cyber-genetic technologies? How should moral, societal and legal values be part of the design process? In this talk, we look at ways to ensure ethical behaviour by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. We will in particular focus on the ART principles for AI: Accountability, Responsibility and Transparency.
Bio: Virginia Dignum is Professor of Social and Ethical Artificial Intelligence at the University of Umeå in Sweden and is associated with the Delft University of Technology in the Netherlands. Her research focuses on the ethical and societal impact of AI. She is a Fellow of the European Artificial Intelligence Association (EURAI), a member of the European Commission High Level Expert Group on Artificial Intelligence, and of the Executive Committee of the IEEE Initiative on Ethics of Autonomous Systems. In 2006, she received the prestigious Veni grant from NWO (Dutch Organization for Scientific Research) for her work on computational agent-based organizational frameworks. She is a well-known speaker on the social and ethical impact of AI, has published extensively on the topic, and is a member of the programme committees of most major journals and conferences in AI.
Elaine Weidman-Grunewald, Co-founder of AI Sustainability Center, Sweden
Title: AI Sustainability - identifying, measuring and governing how AI is scaled in a broader context
In a world of rapid data-driven technological advancements, society is connected and transforming in ways previously unimaginable, bringing instant benefits to all areas of life: how we work, live and play. At the same time, the growing use of personal data and AI systems is posing ethical risks that are difficult to predict and understand. Even with GDPR as a starting point, regulatory frameworks will continue to struggle to keep up. Unintended effects, such as the misuse of data that can lead to privacy intrusion, data and algorithm biases that can result in discrimination, and fake news and synthetic media, cannot be solved by any one company or government alone. New types of multi-stakeholder partnerships, multidisciplinary research and dialogue are needed to more effectively address the societal challenges of a digitalized world.
Bio: Elaine Weidman-Grunewald is a co-founder of the AI Sustainability Center, a Nordic approach to responsible and purpose-driven business, focusing on the impact of future technologies on people and society.
Formerly she was SVP and Chief Sustainability and Public Affairs Officer at Ericsson, and a member of the Executive Team, where among other things she pioneered the concept of Technology for Good, and created some of the most impactful partnerships in this field.
She is on the Board of Sweco AB, an environmental architecture and consulting firm that designs cities of the future. She has been actively engaged in shaping global policy around the role of technology in addressing sustainability challenges, including at the World Economic Forum, the Broadband Commission for Sustainable Development, the UN Sustainable Development Solutions Network, and the Business and Sustainable Development Commission. She is also a corporate development and sustainability adviser to start-ups, companies, and CEOs, and is a member of the International Women’s Forum. She is a Board member of the Whitaker Peace and Development Initiative, which focuses on the importance of youth empowerment, technology and peace building.
She is a frequent speaker at conferences including the World Economic Forum, SXSW, the United Nations, and Mobile World Congress. She holds a double Master’s degree from Boston University’s Center for Energy and Environmental Studies.