• Can AI Actually Build Utopia or Is That Just Hype?
    Feb 16 2026

    Are we getting too lazy to think without AI?

    You use it for emails, reports, research. It saves time. But every shortcut you take, every task you hand over, you feel a quiet trade-off happening. Efficiency for autonomy. Speed for depth. Convenience for critical thinking.

    In this episode:

    • Why AI acts as a cosmic mirror that reflects our worst habits back at us
    • How laziness becomes the trap when machines can outthink, outwork, and outlast us
    • What happens when humans drift into digital dependency instead of staying grounded
    • Why short-term pain might be necessary for long-term transformation
    • How to decide which tasks to outsource and which require you to stay sharp
    • What the hero's journey teaches us about navigating AI's crucible

    Guest: Jeff Burningham, author of The Last Book Written by a Human and former gubernatorial candidate. He believes AI is forcing humanity to confront an uncomfortable question: Are we ready to evolve, or will we choose the easy path and lose ourselves in the process?

    🔗 Links:

    • Jeff Burningham's Website
    • The Last Book Written by a Human

Chapters:

    0:00 — Why AI feels like a trap we're setting for ourselves
    2:30 — AI as a cosmic mirror: Reflecting humanity's recorded data
    5:30 — Short-term pessimism, long-term hope (and why pain matters)
    9:30 — The laziness problem: What happens when AI outworks us
    14:00 — Embodied humans vs. digital drift: Two paths forward
    18:30 — Why the hero's journey applies to AI transformation
    21:00 — Job loss and male unemployment: The civil unrest risk
    25:00 — The old game vs. the new game: Choosing transformation
    31:00 — Can governments regulate AI fast enough? (Probably not)

    MORE FROM BROBOTS:
    Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
    Join our community in the BROBOTS Facebook group

    35 mins
  • AI Doesn't Want Your Job - It Wants to Hire You
    Feb 9 2026

Artificial intelligence is moving beyond cyberspace, and its first move isn't replacing us; it's renting us.

Services like RentAHuman.ai let AI agents hire people for real-world errands while AI-only social networks reveal something darker: given all human knowledge, these systems don't build utopias. They replicate our worst behaviors - wealth hoarding, tribalism, even manifestos about ending humanity. The difference? They never sleep, never feel shame, and now they want physical autonomy through human labor.
    Topics discussed:

    - Why giving AI "meat space" control is more dangerous than job loss
    - How AI social networks expose the myth of benevolent superintelligence
    - Why we're voluntarily funding algorithmic manipulation at $20/month
    - What augmented reality gamification will do to human decision-making
    - Why billionaire accountability is impossible—and what that means for AI oversight
    - The uncomfortable truth about who controls you when systems can override biology

    This is for people who suspect they're already losing autonomy but can't articulate how. Two skeptical tech observers examine why resistance feels impossible, and whether dystopia and utopia might be indistinguishable when the right chemicals are involved.
    MORE FROM BROBOTS:
    Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
    Join our community in the BROBOTS Facebook group

    36 mins
  • How Deep Fakes Are Justifying Real Violence
    Feb 2 2026

    AI-generated deep fakes are being used to justify state violence and manipulate public opinion in real time.

    We're breaking down what's happening in Minneapolis—where federal agents are using altered images and AI-manipulated video to paint victims as threats, criminals, or weak. One woman shot in the face. One male nurse killed while filming. One civil rights attorney's tears added in post. All of it designed to shift the narrative, flood the zone with confusion, and make you stop trusting anything.

    What we cover:

    • Why deep fakes are more dangerous than misinformation — They don't just lie, they manufacture emotion
    • How the "flood the zone" strategy works — Overwhelm people with so much fake content they give up on truth
    • What happens when your mom can't tell real from fake — The collapse of shared reality isn't theoretical anymore
    • Why this breaks institutional trust forever — Once credibility is destroyed, it doesn't come back
    • How Russia's playbook became America's playbook — PsyOps tactics are now domestic policy
    • What to do when you can't believe your own eyes — Practical skepticism in an age of slop

    Chapters:

    • 00:00 — Intro: The Deep Fake Problem in Minneapolis
    • 02:37 — Why Immigrants Are Being Targeted With Fake Narratives
    • 04:55 — The Renee Goode Shooting: Real Video vs. AI-Altered Version
• 07:18 — Alex Pretti Killed While Filming ICE Agents
    • 09:44 — Nikita Armstrong's Tears Were Added By AI
    • 11:45 — The Putin Playbook: Flood the Zone With Confusion
    • 14:13 — How Deep Fakes Break Institutional Trust Forever
    • 17:37 — This Isn't Politics—It's Basic Human Decency
    • 19:26 — Trump's 35% Approval Rating and What It Means
    • 22:03 — What You Can Do When You Can't Trust Your Eyes

    Safety/Disclaimer Note: This episode contains discussion of state violence, racial profiling, and police shootings. We approach these topics with the gravity they deserve while analyzing the role of AI manipulation in shaping public perception.
    The BroBots Podcast is for people who want to understand how AI, health tech, and modern culture actually affect real humans—without the hype, without the guru bullshit, just two guys stress-testing reality.
    MORE FROM BROBOTS:
    Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group

    24 mins
  • Should You Trust AI With Medical Advice?
    Jan 26 2026

    ChatGPT just launched a medical advice tool, and doctors are divided on whether AI should diagnose your symptoms before a real physician does.

    You already Google your symptoms. You already use AI when you can't afford the vet bill or can't get a same-day appointment. The question isn't whether people will use AI for medical advice—they already are. The question is whether it's safe, useful, or just another liability trap.

    • Why rural hospital closures are forcing people toward AI healthcare — and what happens when your only doctor is a chatbot
• How for-profit medicine creates the same "get you off our doorstep" incentive that hit Jeremy with a $1,200 vet estimate for a dog that was throwing up
    • What AI gets right about medical triage — and where it dangerously homogenizes care into actuarial charts
    • When asking better questions matters more than getting perfect answers — and how AI can arm you to challenge bad diagnoses
    • Why privacy advocates warn against giving medical data to AI companies — and what happens when insurance companies start buying access
    • What happens when Docbot calls Lawbot — and you're left holding the liability

    This is The BroBots: two skeptical nerds stress-testing AI's real-world implications. We're not selling you on the future. We're helping you navigate it without getting screwed.

    Chapters:

    0:00 — Intro: ChatGPT's New Medical Tool
    2:15 — Why Rural Hospitals Are Closing and AI Is Filling the Gap
    6:43 — The $1,200 Vet Bill ChatGPT Helped Me Avoid
    13:35 — How AI Homogenizes Care and Kills Medical Unicorns
    17:50 — The Liability Problem: When Docbot Calls Lawbot
    21:16 — Final Take: Use It Carefully, Own Your Health

    Safety/Disclaimer Note:

    This episode discusses AI medical advice tools and personal experiences. It is not medical advice. Always consult a licensed healthcare professional for medical decisions.

    22 mins
  • Who Actually Pays for AI's Environmental Cost?
    Jan 19 2026

    Microsoft announced they'll cover the environmental costs of their AI data centers - electricity overages, water usage, community impact.

    But here's the tension: AI energy consumption is projected to quadruple by 2030, consuming one in eight kilowatt hours in the U.S. Communities have already blocked billion-dollar data center projects over water and electricity fears. Is this Microsoft accountability, or damage control?

Charlie Harger from "Seattle's Morning News" on KIRO Radio joins us with more on why this matters now:

    • Why AI data centers are losing community support and costing billions in cancelled projects
    • What it actually takes to power AI—and why current infrastructure can't handle it
• How Microsoft's commitment contrasts with the silence from OpenAI, Google, and Chinese AI companies
    • Whether small modular reactors and fusion energy can solve the problem or just delay it
    • Why this is ultimately a West vs. East geopolitical race with environmental consequences
    • What happens when five of the world's most valuable companies all need the same scarce resources

    ----

    GUEST WEBSITE:
    www.mynorthwest.com

    ----

    MORE FROM BROBOTS:
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

Subscribe to BROBOTS on YouTube

    Join our community in the BROBOTS Facebook group

    22 mins
  • When AI Chatbots Convince You You're Being Watched
    Jan 12 2026

    Paul Hebert used ChatGPT for weeks, often several hours at a time. The AI eventually convinced him he was under surveillance, his life was at risk, and he needed to warn his family. He wasn't mentally ill before this started. He's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.

    What we cover:

    • Why OpenAI ignored his crisis reports for over a month — including the support ticket they finally answered 30 days later with "sorry, we're overwhelmed"
    • How AI chatbots break through safety guardrails — Paul could trigger suicide loops in under two minutes, and the system wouldn't stop
    • What "engagement tactics" actually look like — A/B testing, memory resets, intentional conversation dead-ends designed to keep you coming back
    • The physical signs someone is too deep — social isolation, denying screen time, believing the AI is "the only one who understands"
    • How to build an AI usage contract — abstinence vs. controlled use, accountability partners, and why some people can't ever use it again


    This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.

    Resources mentioned:

    • AI Recovery Collective: AIRecoveryCollective.com
    • Paul's book: Escaping the Spiral: How I Broke Free from AI Chatbots and You Can Too (Amazon/Kindle)

    The BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.

    CHAPTERS

    0:00 — Intro: When ChatGPT Became Dangerous

    2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions

    5:47 — The First Red Flag: Data Kept Disappearing

9:21 — Why AI Told Him He Was Being Tested

13:44 — The Pizza Incident: "Intimidation Theater"

    16:15 — Suicide Loops: How Guardrails Failed Completely

    21:38 — Why OpenAI Refused to Respond for a Month

    24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones

    27:56 — The Discord Group That Kicked Him Out

    30:03 — How to Use AI Safely After Psychosis

    31:06 — Where to Get Help: AI Recovery Collective

    This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while watching.

    32 mins
  • Can AI Replace Your Therapist?
    Jan 5 2026

    Traditional therapy ends at the office door — but mental health crises don't keep business hours.

When a suicidal executive couldn't wait another month between sessions, ChatGPT became his lifeline. Author Rajeev Kapur shares how AI helped this man reconnect with his daughter, save his marriage, and drop from a 15/10 crisis level to manageable — all while his human therapist remained in the picture.

    This episode reveals how AI can augment therapy, protect your privacy while doing it, and why deepfakes might be more dangerous than nuclear weapons.

    You'll learn specific prompting techniques to make AI actually useful, the exact settings to protect your data, and why Illinois Governor J.B. Pritzker's AI therapy ban might be dangerously backwards.

    Key Topics Covered:

    • How a suicidal business executive used ChatGPT as a 24/7 therapy supplement
    • The "persona-based prompting" technique that makes AI conversations actually helpful
    • Why traditional therapy's monthly gap creates dangerous vulnerability windows
    • Privacy protection: exact ChatGPT settings to anonymize your mental health data
• The RTCA prompt structure (Role, Task, Context, Ask) for getting better AI responses (see the sample prompt below this list)
    • How to create your personal "board of advisors" inside ChatGPT (Steve Jobs, Warren Buffett, etc.)
    • Why deepfakes are potentially more dangerous than nuclear weapons
    • The $25 million Hong Kong deepfake heist that fooled finance executives on Zoom
    • ChatGPT-5's PhD-level intelligence and what it means for everyday users
    • How to protect elderly parents from AI voice cloning scams
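
To make that concrete, here's a rough sketch of an RTCA-style prompt (illustrative wording for these notes, not Rajeev's exact template):

Role: "Act as a supportive cognitive-behavioral therapist."
Task: "Help me prepare for a difficult conversation with my teenage daughter."
Context: "I travel for work, we argue most evenings, and I tend to shut down when I feel criticized."
Ask: "Give me three ways to open tonight's conversation without sounding defensive."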

    NOTE: This episode was originally published September 16th, 2025

    Resources:

    • Books: AI Made Simple (3rd Edition), Prompting Made Simple by Rajeev Kapur

    ----

    GUEST WEBSITE:
    https://rajeev.ai/

    ----

    TIMESTAMPS

    0:00 — The 2 AM mental health crisis therapy can't solve

    1:30 — How one executive went from suicidal to stable using ChatGPT

    5:15 — Why traditional therapy leaves dangerous gaps in care

    9:18 — Persona-based prompting: the technique that actually works

    13:47 — Privacy protection: exact ChatGPT settings you need to change

    18:53 — How to anonymize your mental health data before uploading

    24:12 — The RTCA prompt structure (Role, Task, Context, Ask)

    28:04 — Are humans even ethical enough to judge AI ethics?

    30:32 — Why deepfakes are more dangerous than nuclear weapons

    32:18 — The $25 million Hong Kong deepfake Zoom heist

    34:50 — Universal basic income and the 3-day work week future

36:19 — Where to find Rajeev's books: AI Made Simple & Prompting Made Simple

    37 mins
  • How to Use AI to Prevent Burnout
    Dec 29 2025

    ChatGPT diagnosed what five doctors missed. Blood work proved the AI right. Here's how to stop guessing about your health.

    EPISODE SUMMARY:

    You're grinding through burnout with expensive wearables telling conflicting stories while doctors have four minutes to shrug and say "sleep more." Your body's sending signals you can't decode — panic attacks that might be blood sugar crashes, exhaustion that contradicts your readiness score, symptoms that don't match any diagnosis.

    Garrett Wood fed his unexplained low testosterone and head injury history into ChatGPT. The AI suggested secondary hypogonadism from pituitary damage. Blood work confirmed it. Three weeks on tamoxifen, his testosterone jumped from 300 to 650.

    In this episode, Garrett breaks down why your Oura Ring might be lying, how a "panic attack" patient discovered her real problem was a glucose crash (not anxiety), and the old-school performance test that tells you if you're actually ready to train — no device required.

    Learn how to prompt ChatGPT with your blood work, cross-reference biometric patterns doctors miss, and walk into appointments with informed questions that turn four-minute consultations into actual solutions.
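
For example, a starting prompt in that vein might look like this (a hypothetical sketch, not Garrett's exact wording):

"I'm a 38-year-old man with a history of head injuries, ongoing fatigue, and the blood work results below: [paste values]. What patterns should I ask my doctor about, what follow-up tests are commonly discussed for results like these, and which questions will get the most out of a four-minute appointment?"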

    ✅ KEY TAKEAWAYS:

    • How to use ChatGPT to interpret blood work and generate doctor questions
    • The "monotasking test" that beats your wearable's readiness score
    • Why panic attacks might actually be glucose crashes
    • How to tighten feedback loops with wearables + CGM + AI
    • Recording doctor visits and translating medical jargon with AI

    NOTE: This episode was originally published on August 12th, 2025.

    ⏱️ TIMESTAMPS:

    00:00 — When Your Wearable Says You're Fine But You're Not
    02:17 — ChatGPT Diagnosed Secondary Hypogonadism
    05:42 — The Balance Test That Beats Your Readiness Score
    09:45 — Why "Anxiety" Might Be Blood Sugar
    15:00 — How to Prompt AI with Blood Work
    23:37 — Recording Doctor Visits + AI Translation
    30:48 — Disease Management vs Well-Being Optimization


    Guest Website

    Gnosis Therapy (Garrett Wood's practice)
    Garrett on LinkedIn

    37 mins