AI Ethics with Robert Long

How to think about alleged sentience in current AI systems
Earlier this year, a Google engineer named Blake Lemoine sparked widespread discussion about AI sentience when he publicly claimed that a large language model called LaMDA is sentient and deserves moral consideration. Lemoine's alleged evidence and arguments for his claims were widely (and rightly) criticized.

In this talk, I discuss what sorts of arguments and evidence actually should guide our reasoning about sentience in AI systems, using current large language models like LaMDA as a case study. I also argue that questions about AI sentience are not mere distractions from more immediate problems in AI ethics, but questions that any responsible approach to AI development must grapple with. I discuss the risks of both under- and over-attributing sentience to AI systems: risks that are likely to increase as the behavior of AI systems grows increasingly sophisticated.

Robert Long recently completed a PhD in philosophy at New York University. He works on topics in ethics and philosophy of mind related to artificial intelligence. He is currently a research fellow at the Future of Humanity Institute at Oxford University.

Register here to receive the link to the seminar
Category: Seminar
Location: Online, register to receive the link
Starts: 27 September, 2022, 13:15
Ends: 27 September, 2022, 14:15

Published: Thu 01 Sep 2022