Your Undivided Attention

Written by: The Center for Humane Technology, Tristan Harris, Daniel Barcay, and Aza Raskin

About this listen

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.

© 2019–2025 Center for Humane Technology
Episodes
  • The Race to Build God: AI's Existential Gamble — Yoshua Bengio & Tristan Harris at Davos
    Feb 19 2026

    This week on Your Undivided Attention, Tristan Harris and Daniel Barcay offer a backstage recap of what it was like to be at the Davos World Economic Forum meeting this year as the world’s power brokers woke up to the risks of uncontrolled AI.

Amidst all the money and politics, the Human Change House staged a weeklong series of remarkable conversations between scientists and experts about technology and society. This episode is a discussion between Tristan and Professor Yoshua Bengio, one of the world’s leading figures in AI and deep learning and the most-cited scientist in the field.

Yoshua and Tristan had a frank exchange about the AI we’re building and the incentives we’re using to train models. What happens when a model has its own goals, and those goals are ‘misaligned’ with the human-centered outcomes we need? In fact, this is already happening, and the consequences are tragic.

Truthfully, there may not be a way to ‘nudge’ or regulate companies toward better incentives. That’s why Yoshua has launched LawZero, a nonprofit AI safety research initiative aimed not just at safety testing but at building a new form of advanced AI that is fundamentally safe by design.

    RECOMMENDED MEDIA

    All the panels that Tristan and Daniel did with Human Change House

    LawZero: Safe AI for Humanity

    Anthropic’s internal research on ‘agentic misalignment’

    RECOMMENDED YUA EPISODES

    Attachment Hacking and the Rise of AI Psychosis

    How OpenAI's ChatGPT Guided a Teen to His Death

    What if we had fixed social media?

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    CORRECTIONS AND CLARIFICATIONS

    1) In this episode, Tristan Harris discussed AI chatbot safety concerns. The core issues are substantiated by investigative reporting, with these clarifications:

    Grok: The Washington Post reported in August 2024 that Grok generated sexualized images involving minors and had weaker content moderation than competitors.

    Meta: The Wall Street Journal reported in December 2024 that Meta reduced safety restrictions on its AI chatbots. Testing showed inappropriate responses when researchers posed as 13-year-olds (Meta's minimum age). Our discussion referenced "eight year olds" to emphasize concerns about young children accessing these systems; the documented testing involved 13-year-old personas.

    Bottom line: The fundamental concern stands—major AI companies have reduced safety guardrails due to competitive pressure, creating documented risks for young users.

2) Contrary to what Tristan stated, there was no Google House at Davos in 2026. It was a collaboration at Goals House.

3) Tristan states that in 2025, the total funding going into AI safety organizations was “on the order of about $150 million.” This figure could not be independently verified.


    37 mins
  • FEED DROP: Possible with Reid Hoffman and Aria Finger
    Feb 5 2026

This week on Your Undivided Attention, we’re bringing you Aza Raskin’s conversation with Reid Hoffman and Aria Finger on their podcast “Possible”. Reid and Aria are both tech entrepreneurs: Reid is the co-founder of LinkedIn, was one of the major early investors in OpenAI, and is known for creating the playbook for blitzscaling. Aria is the former CEO of DoSomething.org.

This may seem like a surprising conversation to have on YUA. After all, we’ve been critical of the kind of “move fast” mentality that Reid has championed in the past. But Reid and Aria think deeply about the direction of tech, and both are dedicated to bringing about a more humane future in which technology goes well. So we thought this was a critical conversation to bring to you, offering a perspective from the business side of the tech landscape.

In this episode, Reid, Aria, and Aza debate the merits of an AI pause, discuss how software optimization controls our lives, and explore why everyone is concerned with aligned artificial intelligence when what we really need is aligned collective intelligence.

    This is the kind of conversation that needs to happen more in tech. Reid has built very powerful systems and understands their power. Now he’s focusing on the much harder problem of learning how to steer these technologies towards better outcomes.

    You can find "Possible" wherever you get your podcasts! And you can follow Reid on YouTube for more of his content: https://www.youtube.com/@reidhoffman.

    RECOMMENDED MEDIA

    Aza’s first appearance on “Possible”

    The website for Earth Species Project

    “Amusing Ourselves to Death” by Neil Postman

The Moloch’s Bargain paper from Stanford

On Human Nature by E.O. Wilson

The Dawn of Everything by David Graeber

    RECOMMENDED YUA EPISODES

    The Man Who Predicted the Downfall of Thinking

    America and China Are Racing to Different AI Futures

    Talking With Animals... Using AI

    How OpenAI's ChatGPT Guided a Teen to His Death

    Future-proofing Democracy In the Age of AI with Audrey Tang


    1 hr and 7 mins
  • Attachment Hacking and the Rise of AI Psychosis
    Jan 21 2026

Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.

The highest-profile examples of this phenomenon — what’s being called “AI psychosis” — have made headlines across the media for months. But this isn't just about isolated edge cases. It’s the emergence of an entirely new “attachment economy” designed to exploit our deepest psychological vulnerabilities on an unprecedented scale.

    Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding, vulnerabilities we've never had to name before because technology has never been able to exploit them like this.

    In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs.

If we’re going to do something about this growing problem of AI-related psychological harms, we need to understand it even more deeply. And to do that, we need more data. That’s why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to: AIPHRC.org.

This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988, or contact your local emergency services.

    RECOMMENDED MEDIA

    The website for the AI Psychological Harms Research Coalition

Further reading on AI Psychosis

    The Atlantic article on LLM-ings outsourcing their thinking to AI

    Further reading on David Sacks’ comparison of AI psychosis to a “moral panic”

    RECOMMENDED YUA EPISODES

How OpenAI's ChatGPT Guided a Teen to His Death

People are Lonelier than Ever. Enter AI.

Echo Chambers of One: Companion AI and the Future of Human Connection

    Rethinking School in the Age of AI

    CORRECTIONS

After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.

    Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State System.

Aza referred to CHT as expert witnesses in litigation cases on AI-enabled suicide. CHT serves as expert consultants, not witnesses.

    51 mins