The Other AI: Audio Briefings on Augmented Intelligence and AI Governance

Written by: Basil C. Puglisi

About this listen

The Other AI turns Basil C. Puglisi's articles, white papers, and policy briefs into audio briefings on AI governance, augmented intelligence, human judgment, and human-AI collaboration. The format is built for the time and conditions in which people actually learn: while running, driving, riding a train, or working on something else.

Episodes are AI-narrated for clean, consistent production, and each one passes human review before release. The complete original work, including details, sources, and citations, lives at basilpuglisi.com.

Topics include HAIA-RECCLIN, Factics, Checkpoint-Based Governance, enterprise AI adoption, AI policy, cognitive enhancement, and the future of human authority over automated systems.

This podcast is for executives, researchers, consultants, educators, policy thinkers, and AI practitioners who want more than AI hype. The show focuses on evidence, dissent, governance, measurable outcomes, and the role of human judgment when machines become more capable.

© 2009 Basil Puglisi
Politics & Government
Episodes
  • The AI Governance Pattern Hiding in the Senate EdTech Hearing: The Horvath Case Study
    May 14, 2026

    The Other AI is about Augmented Intelligence and AI Governance, which means it is also about every other governance failure that looks like AI but is not. This briefing is one of those.

    In January 2026, four credentialed expert witnesses testified before the U.S. Senate Commerce Committee on the impact of technology in classrooms. Did their oral testimonies match their own published cognitive research?

    This briefing covers Basil C. Puglisi's white paper, "The Horvath Case Study: Method Governance and Consensus Drift." Four expert witnesses (Dr. Jared Cooney Horvath, Jean Twenge, Emily Cherkin, and Jenny Radesky) presented contradictory positions to different audiences, and those positions reached the legislative record selectively. The artificial expert consensus produced in one hearing room is now shaping state policy, with Missouri House Bill 2230 as the documented case and federal legislation, including the Kids Off Social Media Act, following the same pattern.

    The governance question this briefing surfaces: when credentialed experts deliver a unified position that contradicts their own published research, what mechanism catches it? The cognitive science field did not. The Senate hearing record did not. The legislative record did not. The same failure mode applies to AI deployment when credentialed actors make claims that outpace the evidence and the institutions tasked with verification do not verify.

    Key topics:

    The Four-Artifact Drift. A forensic look at how Dr. Horvath's stance shifted across his book, a podcast, his written testimony, and his viral oral testimony, where he abandoned his own methods-governance concessions in favor of an unhedged ultimatum rooted in biological-mechanism framing.

    The Witness Drift Map. A comparison of the published research of witnesses Twenge, Cherkin, and Radesky against the binary device-removal consensus they delivered together in the hearing room.

    The WEIRD Bias. How the 2010 drop in global standardized test scores aligns with the 2010 WEIRD bias critique, suggesting the data used to justify restricting technology may be a measurement artifact of culturally biased testing instruments.

    Method Governance. Why cognitive science points to method governance, a structured approach requiring active cognitive demand, outcome evidence, and named human accountability, as the answer to classroom tech deployment rather than binary bans. The same principle applies to AI deployment.

    Read the original white paper and view the full artifact analysis: https://basilpuglisi.com/how-credentialed-testimony-outpaces-research-horvath-case-study/

    Disclaimer: This audio briefing was generated by NotebookLM as an AI-produced overview based on the full white paper. The underlying paper is human-authored with AI assistance (#AIassisted) and is the canonical source. Verify quotes and analytical positions against the canonical paper at the link above.

    20 mins
  • Mo Gawdat's AI Dystopia Is Not Inevitable
    May 12, 2026

    Welcome to this episode of The Other AI. Today, we are breaking down a critical analysis of former Google [X] executive Mo Gawdat’s recent AI predictions, drawing from Basil C. Puglisi’s latest governance paper, "The Inevitable Is a Choice".

    Across two recent podcast interviews, Gawdat warned of a "Fourth Inevitable"—an unavoidable 12 to 15 years of dystopia featuring mass unemployment, surveillance, and consent erosion before AI supposedly becomes benevolent enough to save us. But is this dystopian cascade a required transit corridor, or is it a structural failure we can prevent?

    In this episode, we cover:

    • What Gawdat gets right: We explore his strongest operational observations, including his personal multi-AI cross-checking habit to catch hallucinations, the reality of "cognitive amplification" (using AI to extend human capacity rather than replace it), and the documented contraction of entry-level tech hiring.
    • Where the "Fourth Inevitable" fails: We challenge Gawdat’s deterministic prediction that competitive pressure makes unchecked AI deployment unstoppable. His forecast treats the absence of current oversight infrastructure as proof that no infrastructure is possible.
    • The Benevolent AI Contradiction: We unpack the flaw in assuming that we must simply survive a decade of hell until AI becomes smart enough to override greedy humans.
    • The Governance Choice Point: We map out the exact open-source architecture designed to interrupt the deployment cascade (a brief sketch follows this list), including:
      • HAIA-CAIPR: A formal protocol for cross-platform review that scales Gawdat's personal multi-AI habit.
      • AI Provider Plurality: Mandates to prevent single-vendor lock-in at high-stakes decision points.
      • Checkpoint-Based Governance (CBG): Ensuring named human arbiters hold binding authority over AI outputs.
      • VAISA: The proposed Verified AI Inference Standards Act to enforce statutory accountability.
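
    As a rough illustration for listeners who want the pattern in concrete form, the sketch below shows how these components might compose: several providers answer the same question, divergence forces escalation, and a named human arbiter holds the binding decision. It is a minimal Python sketch under our own assumptions; every identifier and the divergence threshold are illustrative, not the HAIA-CAIPR implementation, and the HAIA repository linked below remains the canonical reference.

    ```python
    # Hypothetical sketch of the multi-provider cross-check pattern described
    # above. Provider names, the similarity heuristic, and the threshold are
    # illustrative assumptions, not the HAIA-CAIPR implementation.
    from dataclasses import dataclass
    from difflib import SequenceMatcher

    @dataclass
    class Decision:
        approved: bool
        arbiter: str    # CBG: a named human holds binding authority
        rationale: str

    def divergence(answers: dict[str, str]) -> float:
        """Crude pairwise-dissimilarity score across provider answers."""
        texts = list(answers.values())
        pairs = [(a, b) for i, a in enumerate(texts) for b in texts[i + 1:]]
        if not pairs:
            return 0.0
        return 1 - min(SequenceMatcher(None, a, b).ratio() for a, b in pairs)

    def cross_check(question: str, providers: dict, arbiter: str) -> Decision:
        # Provider plurality: the same question goes to every vendor.
        answers = {name: ask(question) for name, ask in providers.items()}
        if divergence(answers) > 0.4:  # placeholder threshold
            # Checkpoint: divergent outputs escalate; they are never auto-merged.
            return Decision(False, arbiter, "Providers diverge; human review required.")
        return Decision(True, arbiter, "Providers agree within tolerance.")

    # Usage: stub callables stand in for real vendor APIs.
    providers = {"vendor_a": lambda q: "Ship it.", "vendor_b": lambda q: "Do not ship."}
    print(cross_check("Deploy model v2 to production?", providers, arbiter="J. Smith"))
    ```

    The detail that matters is the control flow: agreement can pass automatically, but disagreement can only be resolved by a person whose name is on the decision.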

    Key Takeaway: The 12 to 15 years of hell Gawdat predicts is not inevitable; it is contingent on us failing to build oversight infrastructure. Dystopia is what happens without infrastructure, and the inevitable is actually a choice.

    Read the full paper: "The Inevitable Is a Choice" by Basil C. Puglisi, MPA at https://basilpuglisi.com/mo-gawdat-inevitable-choice/ or on SSRN. Explore the HAIA framework: github.com/basilpuglisi/HAIA

    This episode is #AIgenerated by NotebookLM from Basil C. Puglisi's original paper, for audio learners.

    23 mins
  • Empire of Evidence: Testing Karen Hao's 9 AI Claims Against Governance Infrastructure
    May 10, 2026

    Investigative journalist Karen Hao spent eight years examining the AI industry and conducted over 300 interviews. Her book Empire of AI won the National Book Critics Circle Award for Nonfiction, reached the New York Times bestseller list, and earned her a place on the TIME100 AI list. In her March 2026 interview on The Diary of a CEO, she made nine specific claims about how major AI companies operate.

    This episode is an audio examination of all nine claims, testing each against available evidence and mapping the strongest findings to published open-source AI governance architecture.

    Five claims held under scrutiny:

    Knowledge Production Control. AI companies fund the scientists who study their own systems and censor researchers who produce inconvenient findings. Google fired AI ethics co-leads Dr. Timnit Gebru and Margaret Mitchell. Congress cited Hao's reporting five times.

    AGI Definition Shifting. OpenAI describes artificial general intelligence differently depending on the audience. The OpenAI Charter, the Microsoft contractual threshold, the Congressional framing, and the consumer marketing describe fundamentally incompatible systems.

    Revenue-Driven Capability Selection. Internal documents show companies advance capabilities based on which industries pay the most, not on scientific priority.

    Data Annotation Labor Conditions. The annotation industry absorbs displaced workers and drives conditions downward through structural competition on speed and cost.

    Environmental Externalities. AI data centers consume massive resources. The Memphis Colossus facility runs on 35 gas turbines. Hao acknowledged a 1,000x unit error on one Chilean water figure, but the broader environmental reporting remains substantiated.

    Four claims required challenge:

    The Empire Analogy works as a structural lens but breaks down at literal comparison with colonial empires that enforced power through military violence.

    Self-Driving Car Predictions. Waymo reports 92% fewer serious-injury crashes, but those miles are in five US cities under mapped conditions. New York City rush hour, Bangkok traffic, and unpaved mountain roads in Peru would produce fundamentally different data.

    Bicycles vs. Rockets. AlphaFold was built by Google DeepMind on Google's TPU clusters. The "bicycle" came from the same corporate infrastructure Hao critiques.

    Intelligence Scaling. The mechanism debate is real, but measurable capability improvements in coding, reasoning, and planning are not hypothetical.

    The examination maps findings to AI Provider Plurality, the Economic Override Pattern, the Constitutional Wall Principle, and Multi-Provider Divergence through HAIA-CAIPR. All are published working concepts, not production-validated systems. Other governance approaches may address the same structural problems.

    Full white paper with complete sources and APA references: https://basilpuglisi.com/empire-of-evidence-testing-karen-hao-claims-governance-infrastructure/

    Karen Hao's Diary of a CEO interview: https://www.youtube.com/watch?v=Cn8HBj8QAbk

    Open-source governance frameworks: https://github.com/basilpuglisi/HAIA

    AI Content Disclosure: This audio was generated by Google NotebookLM from the published article. NotebookLM audio cannot be edited after generation. The guidance instructions provided beforehand are the only editorial control available. Proper noun pronunciation varies in AI-generated audio.

    #AIassisted using the HAIA Ecosystem

    14 mins