Code & Cure cover art

Code & Cure

Code & Cure

Written by: Vasanth Sarathy & Laura Hagopian
Listen for free

About this listen

Decoding health in the age of AI


Hosted by an AI researcher and a medical doctor, this podcast unpacks how artificial intelligence and emerging technologies are transforming how we understand, measure, and care for our bodies and minds.


Each episode unpacks a real-world topic to ask not just what’s new, but what’s true—and what’s at stake as healthcare becomes increasingly data-driven.


If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you.


We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.


© 2026 Code & Cure
Hygiene & Healthy Living Science
Episodes
  • #26 - How Your Phone Keyboard Signals Your State Of Mind
    Jan 8 2026

    What if your keyboard could reveal your mental health? Emerging research suggests that how you type—not what you type—could signal early signs of depression. By analyzing keystroke patterns like speed, timing, pauses, and autocorrect use, researchers are exploring digital biomarkers that might quietly reflect changes in mood.

    In this episode, we break down how this passive tracking compares to traditional screening tools like the PHQ. While questionnaires offer valuable insight, they rely on memory and reflect isolated moments. In contrast, continuous keystroke monitoring captures real-world behaviors—faster typing, more pauses, shorter sessions, and increased autocorrect usage—all patterns linked to mood shifts, especially when anxiety overlaps with depression.

    We discuss the practical questions this raises: How do we account for personal baselines and confounding factors like time of day or age? What’s the difference between correlation and causation? And how can we design systems that protect privacy while still offering clinical value?

    From privacy-preserving on-device processing to broader behavioral signals like sleep and movement, this conversation explores how digital phenotyping might help detect depression earlier—and more gently. If you're curious about AI in healthcare, behavioral science, or the ethics of digital mental health tools, this episode lays out both the potential and the caution needed.

    Reference:

    Effects of mood and aging on keystroke dynamics metadata and their diurnal patterns in a large open-science sample: A BiAffect iOS study
    Claudia Vesel et al.
    J Am Med Inform Assoc (2020)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/


    Show More Show Less
    20 mins
  • #25 - When Safety Slips: Prompt Injection in Healthcare AI
    Jan 1 2026

    What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice.

    We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical judgment. Why does this happen? Because language models aim to be helpful within their given context, not necessarily to prioritize authoritative or safe advice. When a browser plug-in, a tainted PDF, or a retrieved web page contains hidden instructions, those can become the model’s new directive, undermining guardrails and safety layers.

    From direct “ignore previous instructions” overrides to obfuscated cues in code or emotionally charged context nudges, we map the many forms of this attack surface. We contrast these prompt injections with hallucinations, examine how alignment and preference training can unintentionally amplify risks, and highlight why current defenses, like content filters or system prompts, often fall short in clinical use.

    Then, we get practical. For AI developers: establish strict instruction boundaries, sanitize external inputs, enforce least-privilege access to tools, and prioritize adversarial testing in medical settings. For clinicians and patients: treat AI as a research companion, insist on credible sources, and always confirm drug advice with licensed professionals.

    AI in healthcare doesn’t need to be flawless, but it must be trustworthy. If you’re invested in digital health safety, this episode offers a clear-eyed look at where things can go wrong and how to build stronger, safer systems. If you found it valuable, follow the show, share it with a colleague, and leave a quick review to help others discover it.

    Reference:

    Vulnerability of Large Language Models to Prompt Injection When Providing Medical Advice
    Ro Woon Lee
    JAMA Open Health Informatics (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    25 mins
  • #24 - What Else Is Hiding In Medical Images?
    Dec 25 2025

    What if a routine mammogram could do more than screen for breast cancer? What if that same image could quietly reveal a woman’s future risk of heart disease—without extra tests, appointments, or burden on patients?

    In this episode, we explore a large-scale study that uses deep learning to uncover cardiovascular risk hidden inside standard breast imaging. By analyzing mammograms that millions of women already receive, researchers show how a single scan can deliver a powerful second insight for women’s health. Laura brings the clinical perspective, unpacking how cardiovascular risk actually shows up in practice—from atypical symptoms to prevention decisions—while Vasanth walks us through the AI system that makes this dual-purpose screening possible.

    We begin with the basics: how traditional cardiovascular risk tools like PREVENT work, what data they depend on, and why—despite their proven value—they’re often underused in real-world care. From there, we turn to the mammogram itself. Features such as breast arterial calcifications and subtle tissue patterns have long been linked to vascular disease, but this approach goes further. Instead of focusing on a handful of predefined markers, the model learns from the entire image combined with age, identifying patterns that humans might never think to look for.

    Under the hood is a survival modeling framework designed for clinical reality, where not every patient experiences an event during follow-up, yet every data point still matters. The takeaway is striking: the imaging-based risk score performs on par with established clinical tools. That means clinicians could flag cardiovascular risk during a test patients are already getting—opening the door to earlier conversations about blood pressure, cholesterol, diabetes, and lifestyle changes.

    We also zoom out to the bigger picture. If mammograms can double as heart-risk detectors, what other routine tests are carrying untapped signals? Retinal images, chest CTs, pathology slides—each may hold clues far beyond their original purpose. With careful validation and attention to bias, this kind of opportunistic screening could expand access to prevention and shift care further upstream.

    If this episode got you thinking, share it with a colleague, subscribe for more conversations at the intersection of AI and medicine, and leave a review telling us which everyday medical test you think deserves a second life.

    Reference:

    Predicting cardiovascular events from routine mammograms using machine learning
    Jennifer Yvonne Barraclough
    Heart (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/



    Show More Show Less
    24 mins
No reviews yet