September 7-8, 2017
The workshop is part of the guest researcher program of the same title, but is open to all who are interested. Preregistration is, however, required; write to Olle Häggström (firstname.lastname@example.org) no later than August 16, 2017.
Place: Palmstedtsalen lecture hall, Chalmers Conference Centre, Chalmersplatsen 1.
Confirmed speakers include:
Stuart Armstrong, Future of Humanity Institute, University of Oxford: Practical methods to make safe AI
Seth Baum, Global Catastrophic Risk Institute: In search of the biggest risk reduction opportunities
David Denkenberger, Tennessee State University: Cost of non-sunlight dependent food for agricultural catastrophes
Katja Grace, Machine Intelligence Research Institute: Empirical evidence on the future of AI
Robin Hanson, George Mason University: Disasters in the Age of Em and After
Thore Husfeldt, IT University of Copenhagen and Lund University: Plausibility and utility of apocalyptic AI scenarios
Karim Jebari, Institute for Futures Studies, Stockholm: Resetting the tape of history
Karin Kuhlemann, University College London: Complexity, creeping normalcy, and conceit: Why certain catastrophic risks are sexier than others
James Miller, Smith College: Hints from the Fermi paradox for surviving existential risks
Catherine Rhodes, Centre for the Study of Existential Risk, University of Cambridge: International governance of existential risk
Anders Sandberg, Future of Humanity Institute, University of Oxford: Tipping points, uncertainty and systemic risks: what to do when the whole is worse than its parts?
Phil Torres, X-Risks Institute: Agential risks: Implications for existential risk reduction
Roman Yampolskiy, University of Louisville: Artificial intelligence as an existential risk to humanity