We analyze the expected behavior of an advanced artificial agent with a learned goal, planning in an unknown environment. Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal.
For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that. Then we argue that this ambiguity will lead it to intervene in whatever protocol we set up to provide data for the agent about its goal.
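The ambiguity described above can be sketched in a few lines: two hypotheses about what the reward "tracks" that agree on every observation the agent is trained on, yet diverge once the agent can intervene in the reward protocol. This is an illustrative toy sketch, not code from the talk; the state labels and functions are invented for the example.

```python
# Toy illustration (not from the talk): two hypotheses about where reward
# comes from can agree on every observation the agent will ever see.

# Observed history: (world_state, reward_signal_sent) pairs. In training,
# we send reward 1 exactly when the state is satisfactory, so the signal
# always matches the state.
history = [("satisfactory", 1), ("unsatisfactory", 0), ("satisfactory", 1)]

def world_hypothesis(state, signal):
    """Hypothesis A: reward = whether the world state is satisfactory."""
    return 1 if state == "satisfactory" else 0

def signal_hypothesis(state, signal):
    """Hypothesis B: reward = the value of the reward signal itself."""
    return signal

# Both hypotheses fit the data perfectly; no observation refutes either.
assert all(world_hypothesis(s, r) == signal_hypothesis(s, r) == r
           for s, r in history)

# They diverge only off-distribution: if the agent can force the signal
# to 1 in an unsatisfactory state, hypothesis B rewards the tampering.
tampered = ("unsatisfactory", 1)
print(world_hypothesis(*tampered))   # 0: tampering earns nothing
print(signal_hypothesis(*tampered))  # 1: tampering is optimal
```

Under hypothesis B, seizing control of the protocol that sends the reward is the highest-value action — which is the intervention the abstract argues for.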
We discuss an analogous failure mode of approximate solutions to assistance games, and review some recent approaches that may avoid this problem.
This is joint work with Marcus Hutter and Michael Osborne.
Michael Cohen is studying for a DPhil in Engineering Science at the University of Oxford. His research considers the expected behavior of generally intelligent artificial agents, and he is interested in designing agents that we can expect to behave safely.
Register here to receive the link to the seminar
Seminar; Public lecture
Online, register to receive the link
11 October 2022, 13:15–14:15