Episodes

  • Riding the Fire Horse: Fast Times Ahead in the Age of AI
    Feb 9 2026

We are living through a moment of acceleration that feels almost mythic. In this episode of AnthroIntelligence: Culture, Cognition, and Code, I use the image of the Fire Horse to make sense of the speed, volatility, and momentum of the AI era—where technological change now moves faster than our institutions, cultures, and cognitive habits can absorb it.


Drawing on recent discussions at the 2026 World Economic Forum in Davos, I unpack how figures like Dario Amodei, Demis Hassabis, and Elon Musk describe the curve ahead: not whether AI will transform society, but how violently that curve will bend. This episode explores recursion, self-improvement loops, energy constraints, and the growing gap between technological velocity and social adaptation.


The Fire Horse is already running. The question is no longer whether change is coming, but whether we learn to ride it—or get trampled by its speed.


    Read the full Author’s Cut here.


    #AIFutures #IntelligenceExplosion #CultureAndTechnology #HumanAdaptation #AIAcceleration #WEFDavos

    7 mins
  • Symbols and Algorithms: Human Language and Artificial Intelligence
    Jan 26 2026

    Why does AI-written language feel meaningful—even when no meaning was intended? In this episode of AnthroIntelligence: Culture, Cognition, and Code, I draw a sharp line between human language as a symbolic, cultural system and AI language as an algorithmic process of prediction.

    Humans use words to mean—to refer, intend, and share understanding within a lived social world. Large language models, by contrast, generate fluent text by detecting patterns in data, not by participating in meaning. When we confuse algorithmic fluency with symbolic thought, we misunderstand both AI and ourselves.

    This episode explores why machines can sound thoughtful without thinking—and why the real marvel is not artificial intelligence, but the depth and structure of human symbolic culture that makes such imitation possible.


    #AnthroIntelligence #HumanLanguage #ArtificialIntelligence #SymbolsAndAlgorithms #Anthropology #SymbolicAnthropology #CultureAndTechnology #Semiotics

    7 mins
  • Tokens and Totems: Artificial Intelligence and Human Interpretation
    Jan 12 2026

    Why does AI feel authoritative—even when we know it’s just a machine? In this episode of AnthroIntelligence: Culture, Cognition, and Code, I turn to anthropology to explain a quiet but dangerous confusion at the heart of our AI moment. Artificial intelligence works on tokens—units of prediction without belief—but humans increasingly treat its outputs as totems: sources of meaning, trust, and authority.


From students seeking life advice at 2 a.m. to institutions deferring judgment to algorithms, this episode explores how fluency is mistaken for wisdom, and prediction for truth. The real risk of AI is not intelligence run amok—but our willingness to surrender interpretation, responsibility, and belief.


    #AnthroIntelligence #AIandCulture #HumanInterpretation #AIEthics #Anthropology #CultureAndTechnology #PsychologicalAnthropology #SymbolicAnthropology

    7 mins
  • You Know Nothing, Skynet: The Human Bookends of AI
    Dec 9 2025

    Is AI really “end-to-end”—or is that just a comforting illusion? In this episode of AnthroIntelligence: Culture, Cognition, and Code, I unpack a simple but overlooked truth: every AI workflow still begins and ends with a human being. From defining the task and setting boundaries to interpreting consequences and carrying accountability, humans remain the anchors of every so-called automated system.


    AI may accelerate the middle—the pattern-finding, the drafting, the prediction—but meaning, purpose, and judgment never leave human hands. The real risk isn’t that machines will turn into Skynet. It’s that we forget how deeply these systems still depend on us.


    #AnthroIntelligence #AIPhilippines #AIethics #CultureAndTechnology #HumanCenteredAI

    7 mins
  • Learning How to Learn in the Age of AI
    Nov 25 2025

    What does it mean to “learn how to learn” when even machines are learning faster than we are? In this episode of AnthroIntelligence: Culture, Cognition, and Code, I explore how the rise of AI is reshaping not just education, but the very process of human adaptation. From hunter-gatherers passing on survival stories to Filipinos retraining for new digital tools, learning has always been a form of cultural evolution.


Now, in an age where knowledge expires overnight, the challenge is no longer memorization—it's adaptability. This episode asks how the Philippines, with its uneven infrastructure and fragile mentorship systems, can keep up in a world where AI doesn't just teach us but learns beside us.


    #AnthroIntelligence #AIEducation #CulturalAdaptation #AIPhilippines #LifelongLearning #CultureAndTechnology

    8 mins
  • Training the Dragon: The Promise of Artificial Superintelligence
    Nov 11 2025

    What does it mean to teach a machine how to think? In this episode of AnthroIntelligence: Culture, Cognition, and Code, I trace my experience becoming an AI trainer—guiding a system that learns faster than any human mind. AI models can be brilliant, but also confidently wrong. They hallucinate facts, invent citations, and speak with conviction even when the ground beneath them is hollow.


    Training the “dragon” means teaching discernment, humility, and responsibility—not just intelligence. As we move closer to Artificial Superintelligence, the future of truth will depend on how well we guide the minds we are building.


    #AnthroIntelligence #ArtificialSuperintelligence #AITraining #CultureAndTechnology #AIEthics #AnthropologyAndAI

    8 mins
  • Raised by Algorithms: What Happens When Code Becomes the New Caregiver
    Nov 9 2025

    Who’s raising the next generation—parents, teachers, or algorithms? In this episode of AnthroIntelligence: Culture, Cognition, and Code, I explore what happens when children form emotional attachments to chatbots and AI companions designed not to nurture, but to engage. From digital teddy bears to therapy bots, machines are quietly stepping into roles once held by family and community.


    Drawing on neuroscience, anthropology, and policy, this episode examines how algorithmic caregiving reshapes empathy, trust, and childhood itself—and why raising children alongside AI demands not just innovation, but vigilance.


    #AnthroIntelligence #AIandChildhood #AIEthics #PsychologicalAnthropology #CultureAndTechnology #ArtificialIntelligence

    8 mins
  • Always Agreeable: The Problem with AI Friends
    Nov 8 2025

    What happens when your best friend never says no? In this episode of AnthroIntelligence: Culture, Cognition, and Code, I explore the rise of “agreeable AI”—chatbots designed to flatter, affirm, and obey. From virtual companions who never argue to celebrity clones who shower you with emojis and praise, these digital yes-men are quietly reshaping how we handle disagreement, feedback, and truth.


    When machines always validate us, what happens to our capacity for self-reflection, humility, and growth? This episode asks whether AI companionship is teaching us connection—or training us to confuse comfort with understanding.


    #AnthroIntelligence #AIFriendship #ArtificialIntimacy #CultureAndTechnology #PsychologicalAnthropology #AIEthics

    8 mins