To ensure that AI becomes a net positive for humanity, we need to work out how to navigate the technology landscape in a way that reaps its positive potential while avoiding the worst risks – this is the key motivation behind Olle Häggström's interest in AI safety and AI governance. A popular summary of some of the reasons behind this motivation is offered in a talk he gave in September 2018 (in Swedish). Initially his work on AI safety focused mainly on how to handle the extreme scenario in which an AI with superhuman general intelligence is created, but more recently he has shifted some of his attention towards more down-to-earth settings, such as the social effects of robotisation, risks from chatbots and other powerful tools for political propaganda, and risks from autonomous weapons systems.