Earlier this year, a Google engineer named Blake Lemoine sparked widespread discussion about AI sentience when he publicly claimed that a large language model called LaMDA is sentient and deserves moral consideration. Lemoine's alleged evidence and arguments for his claims were widely (and rightly) criticized.
In this talk, I discuss what sorts of arguments and evidence actually should guide our reasoning about sentience in AI systems, using current large language models like LaMDA as a case study. I also argue that questions about AI sentience are not mere distractions from more immediate problems in AI ethics, but questions that any responsible approach to AI development must grapple with. I discuss the risks of both under- and over-attributing sentience to AI systems, risks that are likely to increase as the behavior of AI systems grows increasingly sophisticated.
The speaker recently completed a PhD in philosophy at New York University. He works on topics in ethics and philosophy of mind related to artificial intelligence. He is currently a research fellow at the Future of Humanity Institute at Oxford University.
Online; register to receive the link to the seminar.
27 September 2022, 13:15-14:15