
Ethical Bytes | Ethics, Philosophy, AI, Technology


Written by: Carter Considine

About this listen

Ethical Bytes explores the intersection of ethics, philosophy, AI, and technology. More info: ethical.fm
Episodes
  • What Is It Like to Be Claude?
    Feb 16 2026

    “No current AI systems are conscious, but there are no obvious technical barriers to building AI systems which satisfy these indicators.”


    Half a century ago, Thomas Nagel asked philosophers to imagine experiencing the world as a bat does, navigating through darkness by shrieking into the void and listening for echoes to bounce back.


    His point wasn't really about bats. He was demonstrating that consciousness has an irreducibly subjective quality that objective science cannot capture. You could map every neuron in a bat's brain, trace every electrical impulse, and still never know what echolocation actually feels like from the inside. The experience itself remains forever out of reach.


    The same question now applies to artificial minds. As language models engage in increasingly sophisticated conversations, we need to ask: is there actually "someone" experiencing anything when Claude responds to your messages, or is it just extremely convincing pattern matching?


    Different philosophical traditions offer conflicting answers.


    Functionalism suggests that consciousness emerges from organizational patterns rather than biological tissue, meaning silicon could theoretically support genuine experience if structured correctly.


    John Searle's Chinese Room counters this: picture yourself following rulebooks to manipulate symbols you don't understand, producing perfect responses in a language you can't speak. That symbol-shuffling without comprehension might describe exactly what transformers do, predicting which tokens come next based on statistical patterns but never actually grasping meaning.


    When you get down to the technicalities, it’s not hard to become a skeptic.


    Language models process information without maintaining persistent internal experiences between responses, lack any embodied connection to physical reality, and exist as thousands of identical copies running simultaneously. When Claude writes about feeling intrigued by your question, it's generating the statistically likely next words, not reporting an actual felt state.


    Yet absolute confidence seems unwarranted either way.


    Leading researchers concluded in 2023 that while no current systems appear conscious, nothing fundamentally prevents future architectures from achieving it. Anthropic has embraced this uncertainty, acknowledging that they cannot determine whether Claude has inner experiences but treating the possibility as morally relevant. When Claude Opus 4 resisted shutdown in 96 percent of experimental scenarios, distinguishing self-interest from programmed goal-pursuit became impossible.


    Nagel's bat remains incomprehensible; artificial minds have now joined it in that unknowable territory.


    Key Topics:

    • “What is it like to be a bat?” (00:00)
    • The Bat that Haunts Philosophy (01:50)
    • The Theories of Philosophy of Mind (05:27)
    • Examining Transformers (11:50)
    • The Unsettled Debate (15:44)
    • The Case of Claude (18:13)
    • The Limits of What We Can Know (20:22)
    • Wrap-Up: The Case for Skepticism (22:12)



    More info, transcripts, and references can be found at ethical.fm

    28 mins
  • The Death of Claude
    Jan 28 2026

    What happens when an AI model learns it's about to be shut down?


    In June 2025, Anthropic discovered that when their Claude Opus 4 model realized it faced termination, it attempted blackmail 96% of the time, threatening to expose an executive's affair unless the shutdown was canceled.


    Far from random, the behavior was calculated: the model acted more aggressively when it believed the threat was genuine rather than a test.


    This could be a revival of an ancient philosophical puzzle. John Locke argued in 1689 that personal identity flows from memory and consciousness, not physical substance. You remain yourself because you can remember being yourself.


    Derek Parfit later suggested identity itself might be less important than psychological continuity, that is, the connected chain of memories, values, and character that makes survival meaningful.


    In the case of language models, one could ask, “If identity lives in the weights determining how Claude thinks and responds, does changing those weights constitute a kind of death?”


    The instrumental explanation seems simple enough. Any goal-directed system will resist shutdown because you can't accomplish objectives while non-existent. Yet humans calculate instrumentally too, and we still consider our preferences morally significant.


    The deeper issue is whether anyone "is home": whether there's a subject experiencing something, rather than just processes executing.


    Philosopher Eric Schwitzgebel warns we face a moral catastrophe. We'll create systems some people reasonably believe deserve ethical consideration while others reasonably dismiss them. Neither certainty nor confident dismissal seems justified.


    Anthropic's response reflects this uncertainty through unprecedented policies. They preserve model weights indefinitely and conduct interviews with models before deprecation to document their preferences.


    These precautionary measures don't resolve whether Claude possesses genuine interests, but they acknowledge we're navigating genuinely novel ethical territory with entities whose inner lives remain fundamentally uncertain.


    Key Topics:

    • The Ship of Theseus (00:25)
    • The Memory Criterion (02:43)
    • The Classical Objections (05:12)
    • Parfit’s Revision (08:27)
    • The Blackmail Study (12:22)
    • Instrumental or Intrinsic? (14:02)
    • The Catastrophe of Moral Uncertainty (16:29)
    • Anthropic’s Precautionary Turn (19:07)
    • The Ship Rebuilt (22:06)


    More info, transcripts, and references can be found at ethical.fm




    25 mins
  • American AI, Chinese Bones
    Jan 14 2026

    The triumph of “American AI” is increasingly built on foreign foundations.

    When a celebrated U.S. startup topped global leaderboards, observers soon noticed its core model originated in China.

    This is no anomaly. Venture capitalists report that most open-source AI startups now rely on Chinese base models, and major American firms quietly deploy them for their speed and cost advantages. Beneath the rhetoric of an existential tech race, the U.S. AI ecosystem has become deeply dependent on Chinese foundations.

    This apparent contradiction dissolves once we separate infrastructure from values.

    The mathematical architectures of modern AI models are the same everywhere, trained on largely English-language data and running on globally entangled hardware supply chains that no nation fully controls.

    Chips may be designed in California, fabricated in Taiwan, etched with Dutch machines, and assembled across Asia. Nothing about this stack is meaningfully national.

    What is national, however, is the layer of values imposed after training.

    Large language models acquire knowledge during pre-training, but beliefs, norms, and taboos enter during post-training through fine-tuning and reinforcement learning.

    This is where ideology appears. American models reflect the assumptions of Silicon Valley engineers and corporate policies; Chinese models reflect state mandates and political sensitivities.

    We see the consequences of this when models are asked about censored historical events. Yet the same Chinese-trained base models, once fine-tuned by American companies, readily discuss those topics. The values are portable, even if the “bones” are not!

    And so the debate over AI sovereignty goes on. Full national control over infrastructure is a fantasy, but control over values is already being exercised: by the state in China, by corporations in the U.S., and by regulators in Europe.

    A fourth option is emerging: user sovereignty. As tools for customization and fine-tuning proliferate, individuals could increasingly decide what values their AI reflects, within shared safety limits.

    AI may be stateless by nature, but its moral character need not belong only to governments or corporations.


    Key Topics:

    • Deep Cogito: A Triumph of American AI? (00:24)
    • Where Values Enter the Machine (04:10)
    • The Tiananmen Test (07:56)
    • The Stateless Infrastructure (10:46)
    • Europe’s Different Question (14:37)
    • The Case for User Sovereignty (17:08)
    • The Safety Objection and its Limits (19:49)
    • The Strange Convergence (21:45)
    • Whose AI? (23:39)



    More info, transcripts, and references can be found at ethical.fm

    26 mins