Speaker: Dr. Peter J. Barclay, Edinburgh Napier University.
Organised by: CHAIR theme Interpretable AI.
Overview
- Date: 27 November 2025, 14:00–15:00
- Location: TBA, Campus Johanneberg
- Language: English
- Last sign-up date: 19 November 2025
Registration will open soon.
Abstract
While large language models (LLMs) promise many benefits, they also have considerable potential for misuse. Although this has attracted considerable attention in areas such as the manipulation of media for political ends, other areas have been insufficiently investigated. In this seminar I discuss two areas of interest, based on recent research at Edinburgh Napier University:
(1) How material generated in good faith by LLMs can propagate and amplify social biases. For example, we found that text translated between languages can introduce gender assumptions, and automatically generated images can propagate gender and racial stereotypes, despite the introduction of guardrails in widely used LLMs;
(2) The use of LLMs to create "fake fiction", an area where there is little research despite the threat to the livelihoods of authors and other creative workers. While some "sham books" are of obviously poor quality, it is possible to generate text in the style of genre fiction that is difficult for humans to identify, and such books have been misrepresented for sale as human-written. We have had good success in identifying such texts using machine learning.
Bio
Dr. Peter J. Barclay is a Lecturer in Computing in the School of Computing, Engineering, and the Built Environment at Edinburgh Napier University in Scotland, where he leads the postgraduate programme in Data Engineering.
He holds a degree in mathematical sciences from Edinburgh University and a PhD in Computing from Edinburgh Napier University. He has published more than 60 refereed research papers in well-established international journals, conferences, and books.
From 2002 to 2016, he worked in industry as a Software Architect, Vice President for Technology, and Director of Product Development. In 2016 he returned to Edinburgh Napier, where he teaches database technology, data science, software development, and web programming. He has taught invited courses in Romania, Switzerland, France, China, and Norway.

Interpretable AI
Interpretable AI is an emerging field focused on developing AI systems that are transparent and understandable to humans.