Episodes

  • #26 - How Your Phone Keyboard Signals Your State Of Mind
    Jan 8 2026

    What if your keyboard could reveal your mental health? Emerging research suggests that how you type—not what you type—could signal early signs of depression. By analyzing keystroke patterns like speed, timing, pauses, and autocorrect use, researchers are exploring digital biomarkers that might quietly reflect changes in mood.

    In this episode, we break down how this passive tracking compares to traditional screening tools like the PHQ. While questionnaires offer valuable insight, they rely on memory and reflect isolated moments. In contrast, continuous keystroke monitoring captures real-world behaviors—faster typing, more pauses, shorter sessions, and increased autocorrect usage—all patterns linked to mood shifts, especially when anxiety overlaps with depression.

    We discuss the practical questions this raises: How do we account for personal baselines and confounding factors like time of day or age? What’s the difference between correlation and causation? And how can we design systems that protect privacy while still offering clinical value?

    From privacy-preserving on-device processing to broader behavioral signals like sleep and movement, this conversation explores how digital phenotyping might help detect depression earlier—and more gently. If you're curious about AI in healthcare, behavioral science, or the ethics of digital mental health tools, this episode lays out both the potential and the caution needed.

    Reference:

    Effects of mood and aging on keystroke dynamics metadata and their diurnal patterns in a large open-science sample: A BiAffect iOS study
    Claudia Vesel et al.
    J Am Med Inform Assoc (2020)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/


    Show More Show Less
    20 mins
  • #25 - When Safety Slips: Prompt Injection in Healthcare AI
    Jan 1 2026

    What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice.

    We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical judgment. Why does this happen? Because language models aim to be helpful within their given context, not necessarily to prioritize authoritative or safe advice. When a browser plug-in, a tainted PDF, or a retrieved web page contains hidden instructions, those can become the model’s new directive, undermining guardrails and safety layers.

    From direct “ignore previous instructions” overrides to obfuscated cues in code or emotionally charged context nudges, we map the many forms of this attack surface. We contrast these prompt injections with hallucinations, examine how alignment and preference training can unintentionally amplify risks, and highlight why current defenses, like content filters or system prompts, often fall short in clinical use.

    Then, we get practical. For AI developers: establish strict instruction boundaries, sanitize external inputs, enforce least-privilege access to tools, and prioritize adversarial testing in medical settings. For clinicians and patients: treat AI as a research companion, insist on credible sources, and always confirm drug advice with licensed professionals.

    AI in healthcare doesn’t need to be flawless, but it must be trustworthy. If you’re invested in digital health safety, this episode offers a clear-eyed look at where things can go wrong and how to build stronger, safer systems. If you found it valuable, follow the show, share it with a colleague, and leave a quick review to help others discover it.

    Reference:

    Vulnerability of Large Language Models to Prompt Injection When Providing Medical Advice
    Ro Woon Lee
    JAMA Open Health Informatics (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    25 mins
  • #24 - What Else Is Hiding In Medical Images?
    Dec 25 2025

    What if a routine mammogram could do more than screen for breast cancer? What if that same image could quietly reveal a woman’s future risk of heart disease—without extra tests, appointments, or burden on patients?

    In this episode, we explore a large-scale study that uses deep learning to uncover cardiovascular risk hidden inside standard breast imaging. By analyzing mammograms that millions of women already receive, researchers show how a single scan can deliver a powerful second insight for women’s health. Laura brings the clinical perspective, unpacking how cardiovascular risk actually shows up in practice—from atypical symptoms to prevention decisions—while Vasanth walks us through the AI system that makes this dual-purpose screening possible.

    We begin with the basics: how traditional cardiovascular risk tools like PREVENT work, what data they depend on, and why—despite their proven value—they’re often underused in real-world care. From there, we turn to the mammogram itself. Features such as breast arterial calcifications and subtle tissue patterns have long been linked to vascular disease, but this approach goes further. Instead of focusing on a handful of predefined markers, the model learns from the entire image combined with age, identifying patterns that humans might never think to look for.

    Under the hood is a survival modeling framework designed for clinical reality, where not every patient experiences an event during follow-up, yet every data point still matters. The takeaway is striking: the imaging-based risk score performs on par with established clinical tools. That means clinicians could flag cardiovascular risk during a test patients are already getting—opening the door to earlier conversations about blood pressure, cholesterol, diabetes, and lifestyle changes.

    We also zoom out to the bigger picture. If mammograms can double as heart-risk detectors, what other routine tests are carrying untapped signals? Retinal images, chest CTs, pathology slides—each may hold clues far beyond their original purpose. With careful validation and attention to bias, this kind of opportunistic screening could expand access to prevention and shift care further upstream.

    If this episode got you thinking, share it with a colleague, subscribe for more conversations at the intersection of AI and medicine, and leave a review telling us which everyday medical test you think deserves a second life.

    Reference:

    Predicting cardiovascular events from routine mammograms using machine learning
    Jennifer Yvonne Barraclough
    Heart (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/



    Show More Show Less
    24 mins
  • #23 - Designing Antivenom With Diffusion Models
    Dec 18 2025

    What if the future of antivenom didn’t come from horse serum, but from AI models that shape lifesaving proteins out of noise?

    In this episode, we explore how diffusion models, powerful tools from the world of AI, are transforming the design of antivenoms, particularly for some of nature’s deadliest neurotoxins. Traditional antivenom is costly, unstable, and can provoke serious immune reactions. But for toxins like those from cobras, mambas, and sea snakes that are potent yet hard to target with immune responses, new strategies are needed.

    We begin with the problem: clinicians face high-risk toxins and a shortage of effective, safe treatments. Then we dive into the breakthrough: using diffusion models like RosettaFold Diffusion to generate novel protein binders that precisely fit the structure of snake toxins. These models start with random shapes and iteratively refine them into stable, functional proteins, tailored to neutralize the threat at the molecular level.

    You’ll hear how these designs were screened for strength, specificity, and stability, and how the top candidates performed in mouse studies—protecting respiration and holding promise for more scalable, less reactive therapies. Beyond venom, this approach hints at a broader shift in drug development: one where AI accelerates discovery by reasoning in shape, not just sequence.

    We wrap by looking ahead at the challenges in manufacturing, regulation, and real-world validation, and why this shape-first design mindset could unlock new frontiers in precision medicine.

    If you’re into biotech with real-world impact, subscribe, share, and leave a review to help more curious listeners discover the show.

    Reference:

    Novel Proteins to Neutralize Venom Toxins
    José María Gutiérrez
    New England Journal of Medicine (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    21 mins
  • #22 - Hope, Help, and the Language We Choose
    Dec 11 2025

    What if the words we use could tip the balance between seeking help and staying silent? In this episode, we explore a fascinating study that compares top-voted Reddit responses with replies generated by large language models (LLMs) to uncover which better reduces stigma around opioid use disorder—and why that distinction matters.

    Drawing from Laura’s on-the-ground ER experience and Vasanth’s research on language and moderation, we examine how subtle shifts, like saying “addict” versus “person with OUD, ” can reshape beliefs, impact treatment, and even inform policy. The study zeroes in on three kinds of stigma: skepticism toward medications like Suboxone and methadone, biases against people with OUD, and doubts about the possibility of recovery.

    Surprisingly, even with minimal prompting, LLM responses often came across as more supportive, hopeful, and factually accurate. We walk through real examples where personal anecdotes, though well-intended, unintentionally reinforced harmful myths—while AI replies used precise, compassionate language to challenge stigma and foster trust.

    But this isn’t a story about AI hype. It’s about how moderation works in online communities, why tone and pronouns matter, and how transparency is key. The takeaway? Language is infrastructure. With thoughtful design and human oversight, AI can help create safer digital spaces, lower barriers to care, and make it easier for people to ask for help, without fear.

    If this conversation sparks something for you, follow the show, share it with someone who cares about public health or ethical tech, and leave us a review. Your voice shapes this space: what kind of language do you want to see more of?

    Reference:

    Exposure to content written by large language models can reduce stigma around opioid use disorder
    Shravika Mittal et al.
    npj Artificial Intelligence (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    25 mins
  • #21 - The Rural Reality Check for AI
    Dec 4 2025

    How can AI-powered care truly serve rural communities? It’s not just about the latest tech, it’s about what works in places where internet can drop, distances are long, and people often underplay symptoms to avoid making a fuss.

    In this episode, we explore what it takes for AI in healthcare to earn trust and deliver real value beyond city limits. From wearables that miss the mark on weak broadband to triage tools that misjudge urgency, we reveal how well-meaning innovations can falter in rural settings. Through four key use cases—predictive monitoring, triage, conversational support, and caregiver assistance—we examine the subtle ways systems fail: false positives, alarm fatigue, and models trained on data that doesn’t reflect rural realities.

    But it’s not just a tech problem—it’s a people story. We highlight the importance of offline-first designs, region-specific audits, and data that mirrors local language and norms. When AI tools are built with communities in mind, they don’t just alert—they support. Nurses can follow up. Caregivers can act. Patients can trust the system.

    With the right approach, AI won’t replace relationships—it’ll reinforce them. And when local teams, family members, and clinicians are all on the same page, care doesn’t just reach further. It gets better.

    Subscribe for more grounded conversations on health, AI, and care that works. And if this episode resonated, share it with someone building tech for real people—and leave a review to help others find the show.

    Reference:

    From Bandwidth to Bedside — Bringing AI-Enabled Care to Rural America
    Angelo E. Volandes et al.
    New England Journal of Medicine (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    20 mins
  • #20 - Google Translate Walked Into An ER And Got A Reality Check
    Nov 27 2025

    What if your discharge instructions were written in a language you couldn’t read? For millions of patients, that’s not a hypothetical, but a safety risk. And at 2 a.m. in a busy hospital, translation isn’t just a convenience; it’s clinical care.

    In this episode, we explore how AI can bridge the language gap in discharge instructions: what it does well, where it stumbles, and how to build workflows that support clinicians without slowing them down. We unpack what these instructions really include: condition education, medication details, warning signs, and follow-up steps, all of which need to be clear, accurate, and culturally appropriate.

    We trace the evolution of translation tools, from early rule-based systems to today’s large language models (LLMs), unpacking the transformer breakthrough that made flexible, context-aware translation possible. While small, domain-specific models offer speed and predictability, LLMs excel at simplifying jargon and adjusting tone. But they bring risks like hallucinations and slower response times.

    A recent study adds a real-world perspective by comparing human and AI translations across Spanish, Chinese, Somali, and Vietnamese. The takeaway? Quality tracks with data availability: strongest for high-resource languages like Spanish, and weaker where training data is sparse. We also explore critical nuances that AI may miss: cultural context, politeness norms, and the role of family in decision-making.

    So what’s working now? A hybrid approach. Think pre-approved multilingual instruction libraries, AI models tuned for clinical language, and human oversight to ensure clarity, completeness, and cultural fit. For rare languages or off-hours, AI support with clear thresholds for interpreter review can extend access while maintaining safety.

    If this topic hits home, follow the show, share with a colleague, and leave a review with your biggest question about AI and clinical communication. Your insights help shape safer, smarter care for everyone.

    Reference:

    Accuracy of Artificial Intelligence vs Professionally Translated Discharge Instructions
    Melissa Martos, et al.
    JAMA Network Open (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    32 mins
  • #19 - AI That Tames Your Health Data Deluge
    Nov 20 2025

    What if your health data spoke in one calm voice instead of twenty buzzing ones? In this episode, we explore an AI “interpreter layer” that turns step counts, sleep stages, and alerts into fewer, smarter signals that nudge real behavior—without the anxiety spiral. Vasanth (AI researcher and cognitive scientist) and Laura (emergency physician) bring lab insight and frontline reality to a problem most dashboards ignore: humans have limited working memory, serial attention, and a knack for missing rare but important events. More data isn’t always better; often, it’s just louder.

    So what does “useful” look like? Clear summaries in plain language. Patterns stitched across streams—workouts linked to calmer moods, dinner timing tied to glucose swings. Personal baselines that ditch one-size-fits-all thresholds. Instead of a raw feed, imagine a tight weekly brief that surfaces the top two trends, why they matter, and one small experiment to try—aligned with your clinician. That’s the shift from charts to choices.

    Trust and safety stay center stage. We unpack sensor accuracy, false arrhythmia flags, and the risk of AI hallucinations. The answer isn’t blind automation; it’s human-in-the-loop oversight, transparent provenance, and user controls to set goals, define “normal,” and mute the rest. We also show how primary care can ingest concise, standardized summaries instead of five pages of logs—making visits more focused and collaborative.

    If you’re ready to trade a 24/7 body ticker for meaningful insights you can act on, this conversation offers a realistic blueprint. Subscribe, share with a friend drowning in metrics, and leave a review telling us the one metric you actually use—and the one you’d happily hide.

    Reference:

    Do we need AI guardians to protect us from health information overload?
    Arjun Mahajan and Stephen Gilbert
    npj Digital Medicine (2025)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    Show More Show Less
    21 mins