Investigative journalist Karen Hao spent eight years examining the AI industry, conducting more than 300 interviews along the way. Her book Empire of AI won the National Book Critics Circle Award for Nonfiction, reached the New York Times bestseller list, and earned her a place on the TIME100 AI list. In her March 2026 interview on The Diary of a CEO, she made nine specific claims about how major AI companies operate.
This episode is an audio examination of all nine claims, testing each against available evidence and mapping the strongest findings to published open-source AI governance architecture.
Five claims held up under scrutiny:
Knowledge Production Control. AI companies fund the scientists who study their own systems and censor researchers who produce inconvenient findings. Google fired AI ethics co-leads Dr. Timnit Gebru and Margaret Mitchell. Congress cited Hao's reporting five times.
AGI Definition Shifting. OpenAI describes artificial general intelligence differently depending on the audience. The OpenAI Charter, the Microsoft contractual threshold, the Congressional framing, and the consumer marketing describe fundamentally incompatible systems.
Revenue-Driven Capability Selection. Internal documents show companies advance capabilities based on which industries pay the most, not on scientific priority.
Data Annotation Labor Conditions. The annotation industry absorbs displaced workers and drives conditions downward through structural competition on speed and cost.
Environmental Externalities. AI data centers consume massive resources. The Memphis Colossus facility runs on 35 gas turbines. Hao acknowledged a 1,000x unit error on one Chilean water figure, but the broader environmental reporting remains substantiated.
Four claims warranted challenge:
The Empire Analogy. It works as a structural lens but breaks down as a literal comparison with colonial empires, which enforced power through military violence.
Self-Driving Car Predictions. Waymo reports 92% fewer serious-injury crashes, but those miles are in five US cities under mapped conditions. New York City rush hour, Bangkok traffic, and unpaved mountain roads in Peru would produce fundamentally different data.
Bicycles vs. Rockets. AlphaFold was built by Google DeepMind on Google's TPU clusters. The "bicycle" came from the same corporate infrastructure Hao critiques.
Intelligence Scaling. The mechanism debate is real, but measurable capability improvements in coding, reasoning, and planning are not hypothetical.
The examination maps the strongest findings to AI Provider Plurality, the Economic Override Pattern, the Constitutional Wall Principle, and Multi-Provider Divergence through HAIA-CAIPR. All are published working concepts, not production-validated systems, and other governance approaches may address the same structural problems.
Full white paper with complete sources and APA references: https://basilpuglisi.com/empire-of-evidence-testing-karen-hao-claims-governance-infrastructure/
Karen Hao's Diary of a CEO interview: https://www.youtube.com/watch?v=Cn8HBj8QAbk
Open-source governance frameworks: https://github.com/basilpuglisi/HAIA
AI Content Disclosure: This audio was generated by Google NotebookLM from the published article. NotebookLM audio cannot be edited after generation. The guidance instructions provided beforehand are the only editorial control available. Proper noun pronunciation varies in AI-generated audio.
#AIassisted using the HAIA Ecosystem