• Chris Percy: Computational Functionalism, Philosophy, and the Future of AI Consciousness
    Jan 19 2026

    Chris Percy is Director of the CoSentience Initiative and lead researcher on a grant-funded project investigating artificial consciousness. He has authored academic papers on consciousness published in leading academic journals. His applied AI research includes a patent in machine learning and publications at NeurIPS workshops, and the European Conference on Artificial Intelligence. He holds visiting research affiliations with the Universities of Warwick and Derby in the UK and the Qualia Research Institute in the US.

    In this episode, Chris outlines his team's research programme and argues that we should take the possibility of artificial consciousness seriously whilst remaining humble about our current understanding.

    53 mins
  • Cameron Berg: Why Do LLMs Report Subjective Experience?
    Dec 8 2025

    Cameron Berg is Research Director at AE Studio, where he leads research exploring markers for subjective experience in machine learning systems. With a background in cognitive science from Yale and previous work at Meta AI, Cameron investigates the intersection of AI alignment and potential consciousness.

    In this episode, Cameron shares his empirical research into whether current Large Language Models are merely mimicking human text, or potentially developing internal states that resemble subjective experience. We discuss:

    • New experimental evidence where LLMs report "vivid and alien" subjective experiences when engaging in self-referential processing
    • Mechanistic interpretability findings showing that suppressing "deception" features in models actually increases claims of consciousness—challenging the idea that AI is simply telling us what we want to hear
    • Why Cameron has shifted from skepticism to a 20-30% credence that current models possess subjective experience
    • The "convergent evidence" strategy, including findings that models report internal dissonance and frustration when facing logical paradoxes
    • The existential implications of "mind crime" and the urgent need to identify negative valence (suffering) computationally—to avoid creating vast amounts of artificial suffering

    Cameron argues for a pragmatic, evidence-based approach to AI consciousness, emphasizing that even a small probability of machine suffering represents a massive ethical risk requiring rigorous scientific investigation rather than dismissal.

    58 mins
  • Lucius Caviola: A Future with Digital Minds? Expert Estimates and Societal Response
    Nov 19 2025

    Lucius Caviola is an Assistant Professor in the Social Science of AI at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, and a Research Associate in Psychology at Harvard University. His research explores how the potential arrival of conscious AI will reshape our social and moral norms. In today's interview, Lucius examines the psychological and social factors that will determine whether this transition unfolds well, or ends in moral catastrophe. He discusses:

    • Why experts estimate a 50% chance that conscious digital minds will emerge by 2050
    • The "takeoff" scenario where digital minds could outnumber humans in welfare capacity within a decade
    • How "biological chauvinism" leads people to deny consciousness even in perfect whole-brain emulations
    • The dual risks of "under-attribution" (unwittingly creating mass suffering) and "over-attribution" (sacrificing human values for unfeeling code)
    • Surprising findings that people refuse to "harm" AI in economic games even when they explicitly believe the AI isn't conscious

    Lucius argues that rigorous social science and forecasting are essential tools for navigating these risks, moving beyond intuition to prevent us from accidentally creating vast populations of digital beings capable of suffering, or failing to recognise consciousness where it exists.

    1 hr and 12 mins
  • Lenore Blum: AI Consciousness is Inevitable: The Conscious Turing Machine
    Nov 3 2025

    *Lenore refers to a few slides in this podcast; you can see them here.

    Intro

    Today's guest, distinguished mathematician and computer scientist Lenore Blum, explains why she and her husband Manuel believe machine consciousness isn't just possible, it's inevitable. Their reasoning? If consciousness is computational (and they're betting it is), and we can mathematically specify those computations, then we can build them. It's that simple, and that profound.

In this conversation, hosts Will Millership and Callum Chace discuss with Lenore:

    • How the Conscious Turing Machine (CTM) draws from and extends the foundational ideas of Alan Turing's Universal Turing Machine.
    • Using mathematics to "extract and simplify" the complexities of consciousness, searching for the fundamental, formal principles that define it.
    • How the CTM acts as a high-level framework that aligns with the functionalities of competing theories like Global Workspace Theory and Integrated Information Theory (IIT).
    • Why the Blums believe that AI consciousness is "inevitable" and that this provides a functional "roadmap for a conscious AI".
• The ethical implications of machine suffering, and why the phenomenon of "pain asymbolia" suggests a conscious AI must be able to suffer in order to function.
    • What lessons Alan Turing's original "imitation game" can offer us for creating a practical, real-world test for machine consciousness.

    Lenore's Work (links)

• Blum, L., & Blum, M. (2024). AI Consciousness is Inevitable: A Theoretical Computer Science Perspective. arXiv. https://arxiv.org/pdf/2403.17101
• Blum, L., & Blum, M. (2022). A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine. PNAS, 119(21). https://doi.org/10.1073/pnas.2115934119
    • Closer to Truth, Blums’ Conscious Turing Machine
    • Full list of references here.
    1 hr and 43 mins
  • Clara Colombatto: Perceptions of Consciousness, Intelligence, and Trust in Large Language Models
    Oct 13 2025

    Clara is an Assistant Professor in the Department of Psychology at the University of Waterloo in Canada, where she directs the Vision and Cognition Lab.

    Her lab is investigating various aspects of perception and cognition, with a particular focus on the perception of other minds and the visual roots of social cognition.

    The lab is also exploring how we can perceive not just others’ perceptual and cognitive states, but also their metacognitive states such as awareness, confidence, or uncertainty — and how such impressions facilitate communication and collaboration.

    Useful links:

    • Clara Colombatto personal website.
    • Vision and Cognition Lab website.
    • Folk psychological attributions of consciousness to large language models. Article.
    • Illusions of Confidence in Artificial Systems. Article.
    • PRISM website.
    47 mins
  • Keith Frankish: Illusionism and Its Implications for Conscious AI
    Sep 10 2025

    Keith is an Honorary Professor in the Philosophy Department at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme at the University of Crete.

    Keith is best known for his theory of illusionism—the view that phenomenal consciousness, or the subjective feeling of experience, is an illusion. Rather than denying that we have conscious experiences, Keith argues that our intuitive conception of them as inherently mysterious or non-physical is mistaken.

    1 hr and 6 mins
  • Mark Solms: Engineering Consciousness – Can Robots "Give a Damn?"
    Aug 7 2025

In this episode, we ask: if we wanted to construct a conscious mind from scratch, what would we need? That is the question our guest, Professor Mark Solms, addressed in the final chapter of his book The Hidden Spring: A Journey to the Source of Consciousness.

    Mark is a Professor in Neuropsychology at the University of Cape Town, and is president of the South African Psychoanalytical Association. He is also an advisor to PRISM and Conscium.

    Mark has contributed significantly to our understanding of consciousness through his pioneering research in the field of neuropsychoanalysis, which integrates Freudian theory with findings from contemporary neuroscience.

    1 hr and 2 mins
  • Jeff Sebo: AI Sentience, Welfare and Moral Status
    Jul 14 2025

    In this episode, we spoke to Jeff Sebo of New York University. Jeff is the author of the recently published book The Moral Circle: Who Matters, What Matters and Why.

    In it, he challenges us to expand our moral concern beyond the boundaries of species and substrate. He has also co-authored a number of papers arguing that AI welfare is an issue that needs to be taken seriously today.

    Links:

    • The Moral Circle: Who Matters, What Matters and Why. Book.
    • Jeff Sebo personal website.
• Moral consideration for AI systems by 2030. (Paper).
• Is there a tension between AI safety and AI welfare? (Paper).
• PRISM website.
    46 mins