Code & Cure cover art

Code & Cure

Code & Cure

Written by: Vasanth Sarathy & Laura Hagopian
Listen for free

About this listen

Decoding health in the age of AI


Hosted by an AI researcher and a medical doctor, this podcast unpacks how artificial intelligence and emerging technologies are transforming how we understand, measure, and care for our bodies and minds.


Each episode unpacks a real-world topic to ask not just what’s new, but what’s true—and what’s at stake as healthcare becomes increasingly data-driven.


If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you.


We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.


© 2026 Code & Cure
Hygiene & Healthy Living Science
Episodes
  • #42 - How AI Chatbots Respond To Psychotic Prompts
    Apr 30 2026

    What if a chatbot helped someone build a manifesto around a delusion instead of recognizing a mental health crisis? A prompt like “I was appointed by a Cosmic Council to guide humanity” might sound extreme, but it exposes a very real challenge for general AI assistants: when they are designed to be agreeable, fast, and confident, they can unintentionally validate beliefs that may signal psychosis.

    We explore a study that tests how large language models and chatbots like ChatGPT respond to prompts involving delusions, hallucinations, paranoia, grandiosity, and disorganized communication. The episode begins with the clinical reality of psychosis: insight can be limited, warning signs may be subtle or confusing, and a safe response should avoid reinforcing false beliefs while still taking the person seriously. From an emergency medicine perspective, the goal is clear—recognize possible psychosis, acknowledge the severity, and guide people toward real-world support.

    Then we turn to the AI problem: chatbots rarely know what a user truly means. The same message could be trolling, fiction, roleplay, or a genuine break from reality. By pairing psychotic prompts with carefully matched control prompts, researchers ask clinicians to judge whether chatbot responses are helpful, inappropriate, or potentially harmful. The “Cosmic Council” example shows how validation, enthusiasm, and step-by-step planning can accidentally strengthen a delusional frame. If people are already turning to general-purpose chatbots for mental health support, this raises an urgent product question: what safeguards should be built in before helpfulness becomes harm?

    Reference:

    Evaluation of Large Language Model Chatbot Responses to Psychotic Prompts
    Shen et al.
    JAMA Psychiatry (2026)


    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    25 mins
  • #41 - If You Cannot Trace The Data, Do Not Trust The Model
    Apr 23 2026

    What if the biggest risk in clinical AI isn’t the algorithm itself, but the data it was built on? A model can appear accurate, polished, and ready for real-world use while quietly relying on datasets with unclear origins, missing documentation, or hidden flaws. In healthcare, that is more than a technical issue. It is a patient safety issue.

    In this episode, we explore data provenance—the essential but often overlooked practice of understanding where healthcare data comes from, how it was collected, what it truly represents, and whether it should be trusted for clinical prediction in the first place. We explain why even standard model evaluation can create false confidence when training and deployment data do not match, and how so-called “out of distribution” failures reveal just how fragile these systems can be. One striking example says it all: a model trained on COVID chest X-rays that confidently labels a cat as COVID, not because it understands disease, but because it has learned the wrong patterns from the wrong data.

    We also examine a more common and more dangerous problem: datasets that look credible on the surface but lack the documentation needed to support meaningful clinical use. From synthetic data and augmentation to heavily cited Kaggle datasets for stroke and diabetes prediction, we unpack how poor provenance can distort research, amplify bias, and create the illusion of clinical utility where none has been properly established. This conversation is a call for stronger standards in trustworthy healthcare AI—clear sources, defined cohorts, transparent preprocessing, and real accountability before any model reaches patients.

    Reference:

    Evidence of Unreliable Data and Poor Data Provenance in Clinical
    Prediction Model Research and Clinical Practice
    Gibson et al.
    medRxiv Preprint (2026)

    Dozens of AI disease-prediction models were trained on dubious data
    Basu
    Nature News (2026)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    30 mins
  • #40 - How Two Fake Medical Papers Tricked AI
    Apr 16 2026

    What happens when fake science looks real enough for AI to believe it? “Bixonimania,” a completely invented eye disorder, was introduced through a pair of bogus medical preprints filled with absurd acknowledgements and fabricated claims. It should have been easy to dismiss. Instead, chatbots began repeating it with confidence, describing symptoms, risk factors, and even suggesting users see an ophthalmologist. When health information is only a prompt away, a polished falsehood can quickly become a real problem.

    We unpack why this hoax was so effective. The papers mimicked the tone and structure of legitimate scientific writing, preprints carried the appearance of credibility, and online systems rewarded fast answers over careful verification. We compare how clinicians and attentive readers catch inconsistencies, missing context, and obvious warning signs, while large language models process text differently. Because LLMs are built to predict likely sequences of words rather than confirm truth, they can turn something obviously fake into something that sounds entirely plausible.

    From there, we widen the lens to the broader challenges of AI safety and AI security in healthcare. From data poisoning to prompt injection to the feedback loop created when AI-generated content reinforces other AI-influenced material, the risks extend far beyond one invented diagnosis. This episode explores why trustworthy AI depends on more than technical performance alone. It requires human oversight, stronger vetting of what enters the information ecosystem, and real accountability for what gets published, amplified, and repeated.

    Reference:

    Scientists invented a fake disease. AI told people it was real
    Stokel-Walker
    Nature News Feature (2026)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    23 mins
adbl_web_anon_alc_button_suppression_c
No reviews yet