Existential risk to humanity is the third research programme hosted by GoCAS. It is a two-month cross-disciplinary thematic programme, led by Anders Sandberg, Research Fellow at the Future of Humanity Institute, Oxford, with Olle Häggström as local host. The aim is to explore the pathways to existential risk and to see how they can be managed.
The programme opened on September 4, when Anders Sandberg gave an overview of the issues. First, Mattias Goksör, Pro-Vice-Chancellor of the University of Gothenburg, declared that the University was proud to host this programme, bringing together scientists who might otherwise never have met. The theme seemed to him the most important of any conference he had attended, and since every catastrophe movie he had seen involved a lone scientist that nobody wanted to listen to, he was happy to see so many gathered here. Stefan Bengtsson, President of Chalmers University of Technology, also welcomed Anders and was glad to see that the programme had brought together scientists from a broad range of subjects to discuss important topics. The risks of technology development are something that a university of technology must reflect upon, and he looked forward to what this workshop would bring.
Anders began his talk, Existential risk: how threatened is humanity?, by mentioning a journalist who had asked how he could be so cheerful about his subject. He pointed out that he is trying to reduce the risks to humanity, not to bring them about. Nuclear war, a very hot topic in the 1980s, had almost been forgotten until recently. The asteroid Florence, which recently passed by the Earth, is a good reminder of the risks that emanate from nature. Can we do something useful about these risks?
Historically, humanity could have been wiped out at certain points. Homo sapiens is one of several human species, all the others of which have become extinct, and the fact that our ancestors seem to descend from one small population suggests that a large part of humanity was at some time destroyed, perhaps in the aftermath of a volcanic eruption. In recent times, we came very close to a nuclear war in 1983: had the Soviet officer Stanislav Petrov not decided that he did not believe what was indicated to be a nuclear attack, which turned out to be a software error, events could have unfolded very differently.
The risk categories can be placed in a chart with two axes: scope, ranging from personal to pan-generational, and severity, ranging from imperceptible to crushing or terminal, with the existential risks in the upper right corner. Anders listed possible existential risks emanating from nature: astronomical (where the risk is not very great at the moment, though solar flares can heavily disrupt our power grids), natural hazards (where we are at no great risk at the species level), climate changes not caused by humans (such as ice ages, which are a slow-moving problem), and natural pandemics (where on one hand a virus today can spread in an instant, but on the other we are healthier and more resistant than before).
He then continued with anthropogenic risks, that is, those emanating from human activities. Some people have asked why he does not work on the risk of climate change, and the answer is that so many people already do, while he and his colleagues work on the neglected problems. Among these can be found war (where the real problem would be a nuclear winter, which would make agriculture almost impossible, and where the potential for totalitarian states has increased), physics experiments (whose risk assessments are hard to make truly watertight), emerging technologies (where the benefits must be balanced against negative consequences), geoengineering failures, biotechnology (where the growing ability to do damage is something we need to handle), nanotechnology, and artificial intelligence.
Many of the risks are systemic and can lead to synchronous failures, such as fluctuations in the food and energy systems, which have now become interconnected. In complex cases like this, even defining the probability of disaster is hard, since it is an unprecedented event that will only happen once, and for idiosyncratic reasons. The natural existential risks are not large today, but some of the anthropogenic risks are ones we can do something about now, though perhaps not later.
So, what are the solutions? Anders said that information is valuable: it helps us focus on the relevant hazards, set priorities better, and develop better methodology. Biases can be reduced, and risks can be detected and averted through monitoring, prevention, and new intervention systems. We can avoid creating more risks through relinquishment and moratoria, by pursuing technology more carefully, and by changing the order in which technologies arrive; and we can increase resilience, for example through more resources, trust, diversity, and geographic spread.
Anders concluded by asking whether we can mature and survive as a species, and reminded us that enormous values are at stake. We do not want to end up in the global catastrophic risk corner, and we have a moral responsibility for the future of coming generations. To deal with a risk of existential proportions, it is important to be optimistic, as he himself is: he sees a bright future worth fighting for, and believes we can do something to reduce the threats to it.
Text and photos: Setta Aspström