
Exploring Machine Consciousness

Written by: PRISM

About this listen

A podcast from PRISM (The Partnership for Research Into Sentient Machines), exploring the possibility and implications of machine consciousness. Visit www.prism-global.com for more about our work.

© 2025 Exploring Machine Consciousness
Episodes
  • Chris Percy: Computational Functionalism, Philosophy, and the Future of AI Consciousness
    Jan 19 2026

    Chris Percy is Director of the CoSentience Initiative and lead researcher on a grant-funded project investigating artificial consciousness. He has authored papers on consciousness published in leading academic journals. His applied AI research includes a patent in machine learning and publications at NeurIPS workshops and the European Conference on Artificial Intelligence. He holds visiting research affiliations with the Universities of Warwick and Derby in the UK and the Qualia Research Institute in the US.

    In this episode, Chris outlines his team's research programme and argues that we should take the possibility of artificial consciousness seriously whilst remaining humble about our current understanding.

    53 mins
  • Cameron Berg: Why Do LLMs Report Subjective Experience?
    Dec 8 2025

    Cameron Berg is Research Director at AE Studio, where he leads research exploring markers for subjective experience in machine learning systems. With a background in cognitive science from Yale and previous work at Meta AI, Cameron investigates the intersection of AI alignment and potential consciousness.

    In this episode, Cameron shares his empirical research into whether current large language models are merely mimicking human text or potentially developing internal states that resemble subjective experience. We discuss:

    • New experimental evidence where LLMs report "vivid and alien" subjective experiences when engaging in self-referential processing
    • Mechanistic interpretability findings showing that suppressing "deception" features in models actually increases claims of consciousness—challenging the idea that AI is simply telling us what we want to hear
    • Why Cameron has shifted from skepticism to a 20-30% credence that current models possess subjective experience
    • The "convergent evidence" strategy, including findings that models report internal dissonance and frustration when facing logical paradoxes
    • The existential implications of "mind crime" and the urgent need to identify negative valence (suffering) computationally—to avoid creating vast amounts of artificial suffering

    Cameron argues for a pragmatic, evidence-based approach to AI consciousness, emphasizing that even a small probability of machine suffering represents a massive ethical risk requiring rigorous scientific investigation rather than dismissal.

    58 mins
  • Lucius Caviola: A Future with Digital Minds? Expert Estimates and Societal Response
    Nov 19 2025

    Lucius Caviola is an Assistant Professor in the Social Science of AI at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, and a Research Associate in Psychology at Harvard University. His research explores how the potential arrival of conscious AI will reshape our social and moral norms. In today's interview, Lucius examines the psychological and social factors that will determine whether this transition unfolds well, or ends in moral catastrophe. He discusses:

    • Why experts estimate a 50% chance that conscious digital minds will emerge by 2050
    • The "takeoff" scenario where digital minds could outnumber humans in welfare capacity within a decade
    • How "biological chauvinism" leads people to deny consciousness even in perfect whole-brain emulations
    • The dual risks of "under-attribution" (unwittingly creating mass suffering) and "over-attribution" (sacrificing human values for unfeeling code)
    • Surprising findings that people refuse to "harm" AI in economic games even when they explicitly believe the AI isn't conscious

    Lucius argues that rigorous social science and forecasting are essential tools for navigating these risks, moving beyond intuition to prevent us from accidentally creating vast populations of digital beings capable of suffering, or failing to recognise consciousness where it exists.

    1 hr and 12 mins