• Why Healthcare AI Fails Without Complete Medical Records: Interoperability, Transparency & Patient Access
    Jan 21 2026

    Healthcare AI cannot deliver precision medicine without complete, interoperable medical records; they are the foundation of responsible AI implementation in healthcare. In this episode, recorded live at the Data First Conference in Las Vegas, Aleida Lanza, founder and CEO of Casedok, shares insights from her 35 years as a medical malpractice paralegal on why fragmented records and inaccessible data continue to undermine care quality, safety, and trust in healthcare AI.

    We dive deep into why interoperability must extend beyond the core clinical record to include the full spectrum of healthcare data—images, itemized bills, claims history, and even records trapped in paper or PDFs. Aleida argues that patient ownership and transparency of health information, a critical element of healthcare ethics, are key to overcoming these challenges and enabling ethical leadership in healthcare AI.

    This episode also highlights the significant risks posed by missing-data bias in healthcare AI, explaining how incomplete records prevent AI systems from accurately detecting patient needs. Aleida outlines how complete medical record transparency and safe AI collaboration can transform healthcare from static averages to truly personalized, informed care, in line with the principles of ethical, responsible AI deployment.
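
    To make the missing-data point concrete, here is a toy sketch, ours rather than an example from the episode, of why records that go missing non-randomly are so dangerous: if sicker patients are also the ones with the most fragmented records, any statistic computed from the visible data understates need before a model ever trains on it. All numbers are illustrative.

    ```python
    import random

    random.seed(0)

    # Toy cohort: each patient has a true severity score (mean 50).
    patients = [random.gauss(50, 15) for _ in range(10_000)]

    # Hypothetical missingness: the sicker the patient, the more providers they
    # see, so the more likely their record is fragmented and effectively unseen.
    observed = [s for s in patients if random.random() > min(0.9, s / 100)]

    true_avg = sum(patients) / len(patients)
    seen_avg = sum(observed) / len(observed)
    print(f"true average severity:       {true_avg:.1f}")
    print(f"severity in visible records: {seen_avg:.1f}  # systematically too low")
    ```

    Because records vanish in proportion to severity, collecting more data of the same kind never fixes the estimate; only recovering the missing records does.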

    If you're involved in healthcare leadership, AI strategy, data governance, or healthcare ethics, this episode offers valuable perspectives on AI readiness, healthcare AI regulation, and the urgent need to improve interoperability for better patient outcomes.

    Key topics covered

    • Why interoperability must include the entire medical record
    • Patient ownership, transparency, and access to health data
    • The hidden cost of fragmented records and repeated history-taking
    • Why static averages fail patients and clinicians
    • Precision medicine vs static medicine
    • Safe AI deployment without hallucination or data leakage
    • Missing data as the most dangerous bias in healthcare AI
    • Emergency access to complete history as a patient safety issue
    • Medicare, payer integration, and large-scale access challenges

    Chapters

    00:00 Live from Data First Conference
    01:20 Why interoperability is more than clinical data
    03:40 Fragmentation, static medicine, and broken incentives
    05:55 Why AI needs complete patient history
    08:10 Missing data as invisible bias
    10:55 Emergency care and inaccessible records
    12:40 Patient ownership and transparency
    14:30 Precision medicine and AI safety
    16:10 Why patients should own what they paid for
    18:30 How to connect with Aleida Lanza

    Stay tuned. Stay curious. Stay human.

    #HealthcareAI #Interoperability #PatientData

    16 mins
  • AI Ethics & Ethical Leadership in Healthcare: Building Trust Without Losing Humanity
    Jan 14 2026

    Recorded live at the Put Data First AI conference in Las Vegas, this episode of The Signal Room features a deep conversation between Chris Hutchins and Asha Mahesh, an expert in AI ethics, ethical leadership, and responsible data use in healthcare. The discussion goes beyond hype to examine what it truly means to humanize AI for care and build trust through ethical leadership and sound AI strategy.

    Asha shares her personal journey into ethics and technology, shaped by lifelong proximity to healthcare and a commitment to ensuring innovation serves patients, clinicians, and communities. Together, they explore how ethical AI in healthcare is not just a policy document, but a way of working embedded into culture, incentives, and daily decision-making.

    Key themes include building trust amid skepticism, addressing fears of job displacement, and reframing AI adoption through a 'what's in it for you' lens. Real-world examples from COVID vaccine development show how AI, guided by purpose and urgency, can accelerate clinical trials without sacrificing responsibility.

    The conversation also discusses human-in-the-loop systems, the irreplaceable roles of empathy and judgment, and the importance of transparency and humility in healthcare leadership. This episode is essential listening for healthcare leaders, life sciences professionals, and AI practitioners navigating the ethical crossroads of trust and innovation.


    Chapters

    00:00 – Live from Put Data First: Why AI Ethics Matters in Healthcare
    Chris Hutchins opens the conversation live from the Put Data First AI conference in Las Vegas, framing why ethics, privacy, and trust are amplified challenges in healthcare and life sciences.

    01:05 – Asha’s Path into AI Ethics, Privacy, and Life Sciences
    Asha shares her personal journey into healthcare technology, data, and AI ethics, shaped by early exposure to hospitals, science, and real-world impact.

    03:00 – Human Impact as the North Star for Healthcare AI
    Why improving patient outcomes, not technology novelty, must guide AI strategy, data science, and innovation decisions in healthcare.

    04:30 – Humanizing AI for Care: Purpose Before Technology
    A discussion on what “human-centered AI” really means and how intention and intended use define whether AI helps or harms.

    06:20 – Embedding Ethics into Culture, Not Policy Documents
    Why ethical AI is not a checklist or white paper, but a set of behaviors, incentives, and ways of working embedded into organizational culture.

    07:55 – COVID Vaccine Development: AI Done Right
    A real-world example of how data, machine learning, and predictive models accelerated clinical trials during the pandemic while maintaining responsibility.

    10:15 – Mission Over Technology: Lessons from the Pandemic
    How urgency, shared purpose, and collaboration unlocked innovation faster than tools alone, and why that mindset should not require a crisis.

    12:20 – The Erosion of Trust in Institutions and Technology
    Chris reflects on declining trust in government, healthcare, and technology, and why AI leaders must now operate from a trust deficit.

    14:10 – Fear and AI: Addressing Job Loss Concerns
    A practical conversation on why fear of AI replacing jobs persists and how leaders can reframe AI as support, not replacement.

    16:30 – “What’s In It for You?” A Human-Centered Adoption Framework
    How focusing on individual value, workflow relief, and personal benefit increases trust and adoption of AI tools in healthcare and life sciences.

    18:00 – How Human Should AI Be?

    22 mins
  • Why Healthcare Isn’t Ready for AI Yet | Emotional Readiness, Just Culture & Leadership Trust
    Jan 7 2026

    Healthcare can’t be technologically ready for AI until it’s emotionally ready first.

    In this episode of The Signal Room, host Chris Hutchins sits down with Susie Brannigan — a trauma-informed nurse executive, Just Culture leader, and AI ethics advocate — to explore the human readiness gap in healthcare transformation.

    Susie explains why trust must be rebuilt before new systems (Epic, AI, automation) can succeed, and how leaders can shift culture from blame to learning, from burnout to belonging. Drawing from real unit experience and frontline realities, she breaks down what emotionally safe leadership looks like during implementation, why “pilot” language often erodes credibility, and how Just Culture + trauma-informed leadership create the psychological safety required for change.

    We also discuss where AI can genuinely help clinicians (and where it can go too far), including guardrails for empathy, presence, and patient-facing AI interactions. If you’re leading digital transformation, managing workforce fatigue, or trying to implement AI without losing your people, this conversation is a practical guide.

    Key topics covered

    • The human readiness gap: emotional readiness before technological readiness
    • Trust erosion in healthcare leadership and why it blocks adoption
    • Epic implementation lessons: skill gaps, overtime, and unit-level support
    • What Just Culture is and how it reduces fear and turnover
    • Trauma-informed leadership and psychological safety on high-acuity units
    • Emotional intelligence alongside data literacy as a core leadership skill
    • Designing AI with empathy, guardrails, and clinical accountability
    • Practical advice for leaders: rounding with purpose, supporting staff, choosing sub-leaders

    Chapters

    00:00 Emotional readiness and the human readiness gap
    01:10 Why implementations fail without trust
    07:20 Epic vs AI: why this shift feels different
    09:10 What Just Culture is and why it works
    11:20 Trauma-informed leadership and secondary trauma
    19:40 Emotional intelligence in tech-driven environments
    22:10 AI, empathy, and guardrails for patient-facing tools
    29:30 Coaching and simulation: preparing nurses for crisis care
    34:40 Leadership advice for AI-era change
    38:20 How to connect with Susie Brannigan
    42:10 Closing

    Connect with Susie Brannigan

    • LinkedIn: Susie Brannigan
    • Business page: Susie Brannigan Consulting
      (Susie shares culture assessments, Just Culture training, trauma-informed training, and leadership support across healthcare and other industries.)

    If this episode resonated, share it with a leader who’s trying to implement change without losing trust. The future of healthcare transformation depends on psychological safety.

    Stay curious. Stay human.

    #JustCulture #HealthcareLeadership #AIinHealthcare

    38 mins
  • The Hidden Infrastructure of Trust: Why You Can't Scale AI Without Scaling Trust
    Jan 7 2026

    In this insightful episode of The Signal Room, Chris Hutchins sits down with Amit Shivpuja, Director of Data and AI Enablement at Walmart, to delve into the critical role of ethical leadership and responsible AI in building trust for successful AI adoption. Recorded live at the Put Data First Conference in Las Vegas, they discuss why trust forms the foundation for scaling AI technologies effectively.

    Amit emphasizes that AI can only reach its potential when it is built on trustworthy data; as he puts it, "garbage in is garbage squared out," because AI doesn't just pass flawed data through, it amplifies the bias in it. The conversation covers how early stakeholder involvement and human oversight are essential components of responsible AI strategies. Amit also addresses workforce concerns by advocating transparent communication about AI's impact and investment in upskilling.
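
    A cartoon of that "garbage squared" dynamic, with an update rule we invented purely to show the shape of the feedback loop (this is our sketch, not a model Amit describes): when a system is retrained on its own outputs, an initial skew in the data compounds rather than merely persisting.

    ```python
    # Toy feedback loop: a screening model is retrained each round on its own
    # prior decisions. Group B starts 5 points under-approved, and the gap
    # compounds instead of staying constant.
    rates = {"A": 0.50, "B": 0.45}

    for round_num in range(1, 5):
        mean = sum(rates.values()) / len(rates)
        # Invented update rule: r -> r * (r / mean). A group with fewer positive
        # examples than average generates even fewer in the next round. Because
        # (A + B) / mean is exactly 2, the A-B gap doubles every round.
        rates = {g: r * (r / mean) for g, r in rates.items()}
        print(f"round {round_num}: A={rates['A']:.3f}  B={rates['B']:.3f}  "
              f"gap={rates['A'] - rates['B']:.3f}")
    ```

    The exact rule doesn't matter; the point is that without human oversight and trustworthy inputs, a loop like this turns a 5-point gap into an 80-point gap in four retraining rounds.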

    Listeners will gain valuable insights into how healthcare leadership, AI ethics, and AI readiness intersect to create scalable, trustworthy AI systems. This episode is a must-listen for leaders and innovators aiming to integrate ethical AI principles into their business strategy.

    Connect with Amit on LinkedIn, explore his book 'The Data and AI Compass,' and follow his Substack for deeper insights into AI governance and data strategy.

    15 mins
  • Cybersecurity + AI Risk + Workforce Transformation
    Dec 31 2025

    As AI adoption accelerates, cybersecurity risk is evolving faster than most organizations are prepared for.

    In this episode of The Signal Room, host Chris Hutchins is joined by Anita Mareedu, a network security engineer at Cadence, to explore how artificial intelligence is reshaping cybersecurity, national security, and workforce readiness.

    Recorded at the Data First Conference in Las Vegas, this conversation dives into real-world cybersecurity challenges across industries, including healthcare, finance, government, and semiconductor design. Anita shares her journey from electrical engineering and VLSI research into network and application security, offering a grounded perspective on how AI both accelerates productivity and expands the attack surface.

    We discuss why AI systems must be secured as aggressively as they are deployed, how the CIA triad (confidentiality, integrity, availability) applies differently across industries, and what leaders need to understand about data exposure, access controls, and compliance as AI becomes embedded into everyday workflows.

    This episode is especially relevant for leaders navigating AI adoption, cybersecurity professionals managing expanding risk, and students or early-career professionals wondering how to prepare for an AI-driven future of work.

    Key topics covered

    • How AI is changing cybersecurity and network security
    • AI risk management and agentic AI concerns
    • National security implications of AI and data exposure
    • Industry-specific security priorities: healthcare, finance, government
    • Application security, API security, and cloud environments
    • Compliance frameworks including NIST, HIPAA, and SOC
    • Workforce transformation and career pathways in cybersecurity
    • How to learn, adapt, and stay relevant as technology evolves

    Chapters

    00:00 AI, cybersecurity, and why risk is accelerating
    02:50 From VLSI to network security: a career journey
    07:40 Application security, APIs, and modern attack surfaces
    12:30 AI productivity vs. AI security risk
    15:20 National security, data exposure, and sensitive information
    18:10 Industry differences: healthcare, finance, government
    21:00 Workforce fear, job security, and AI transformation
    24:00 Advice for students and early-career professionals
    27:00 Where to start learning cybersecurity today
    30:00 Closing reflections

    If this conversation resonated, share it with a colleague working in AI, cybersecurity, or technology leadership. AI is here to stay. How we secure it will shape everything that comes next.

    Stay curious. Stay human.

    19 mins
  • Authentic Intelligence in Healthcare AI | Context, Explainability, Bias & Human-in-the-Loop Design
    Dec 24 2025

    “Authentic intelligence” is not just smarter AI. It’s AI that behaves closer to human reasoning by understanding context, recognizing limits, and supporting human judgment.

    Recorded live at the Data First Conference in Las Vegas, this episode of The Signal Room features Keshavon Shashari, Senior Machine Learning Engineer at Prudential Financial, in a practical conversation about what it takes to design AI systems that can be trusted in high-stakes environments like healthcare.

    We explore why context is everything in clinical and administrative workflows, and why general-purpose large language models should not be treated like physicians. Keshavon breaks down four critical categories of healthcare context (patient, task, human availability, and institutional/regulatory requirements) and explains how modern AI systems should include confidence thresholds, risk-aware checkpoints, guardrails, and evaluation frameworks so humans stay in the loop—especially for diagnosis, surgery, and other regulated decisions.
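
    As a minimal sketch of that deferral pattern (the tier names, thresholds, and function here are ours, not Keshavon's system), a risk-aware checkpoint auto-accepts a model output only when its confidence clears a threshold set by the risk tier of the task, and routes everything else to a human.

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class Risk(Enum):
        LOW = "low"        # e.g., appointment routing
        MEDIUM = "medium"  # e.g., coding or documentation suggestions
        HIGH = "high"      # e.g., diagnosis support

    # Hypothetical per-tier confidence thresholds; real values would come from
    # evaluation sets and institutional/regulatory requirements. A threshold
    # above 1.0 means the tier is never auto-accepted.
    THRESHOLDS = {Risk.LOW: 0.80, Risk.MEDIUM: 0.95, Risk.HIGH: 1.01}

    @dataclass
    class Decision:
        action: str        # "auto_accept" or "defer_to_human"
        confidence: float
        reason: str

    def route(output: str, confidence: float, risk: Risk, human_available: bool) -> Decision:
        """Risk-aware checkpoint: accept a model output only when its confidence
        clears the threshold for its risk tier; otherwise keep a human in the loop."""
        if confidence >= THRESHOLDS[risk]:
            return Decision("auto_accept", confidence, f"cleared {risk.value}-risk threshold")
        if human_available:
            return Decision("defer_to_human", confidence, "below threshold; routed to reviewer")
        # No reviewer on hand: fail safe and queue rather than act autonomously.
        return Decision("defer_to_human", confidence, "queued until a reviewer is available")

    print(route("suspected pneumonia", 0.97, Risk.HIGH, human_available=True))
    ```

    Setting the high-risk threshold above 1.0 encodes the episode's point that some decisions, such as diagnosis or surgery, should be deferred to a human no matter how confident the model is.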

    We also dive into explainability and transparency: how logging, tool tracing, and agent-level reasoning can make AI actions auditable, and how feedback loops (including reinforcement learning from human feedback) can reduce bias over time.
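
    As a rough illustration of the auditability idea (a sketch under our own assumptions, not the systems discussed in the episode), tool tracing can be as simple as recording every tool call an agent makes, with inputs, outputs, and timestamps, in an append-only log that reviewers can replay. All function and field names here are invented for the example.

    ```python
    import json
    import time
    import uuid
    from typing import Any, Callable

    AUDIT_LOG = "agent_trace.jsonl"  # append-only trace, one JSON record per tool call

    def traced(tool: Callable[..., Any]) -> Callable[..., Any]:
        """Wrap a tool so every invocation is written to the audit log."""
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            record = {
                "trace_id": str(uuid.uuid4()),
                "tool": tool.__name__,
                "args": repr(args),
                "kwargs": repr(kwargs),
                "ts": time.time(),
            }
            result = tool(*args, **kwargs)
            record["result"] = repr(result)
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps(record) + "\n")
            return result
        return wrapper

    @traced
    def lookup_allergies(patient_id: str) -> list[str]:
        # Hypothetical tool; a real one would query the EHR.
        return ["penicillin"]

    lookup_allergies("patient-123")  # the call is now logged and replayable
    ```

    An append-only JSONL trace like this is the simplest substrate for the kind of auditability described here: reviewers can reconstruct which tools an agent called, in what order, with what data.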

    If you are building healthcare AI, leading data/AI strategy, or evaluating clinical AI solutions, this episode provides a clear framework for designing systems that are safer, more explainable, and more context-aware.

    Key topics covered

    • What “authentic intelligence” means vs artificial intelligence
    • Why context is everything in healthcare AI
    • Four types of context: patient, task, human availability, and institutional/regulatory context
    • Why general-purpose LLMs are not “doctors”
    • Human-in-the-loop design: confidence thresholds and risk-aware deferral
    • Guardrails, eval sets, testing mechanisms, and compliance considerations
    • Explainability and transparency: logging, tool tracing, and agent reasoning
    • Bias, data quality, and reinforcement learning from human feedback
    • How to prevent “technically correct, contextually wrong” outcomes

    Chapters

    00:00 Authentic intelligence and context awareness
    00:45 Live from Data First Conference (Las Vegas)
    02:20 What authentic intelligence means in practice
    03:30 Four types of context in healthcare AI
    06:25 Training, fine-tuning, and context engineering
    08:15 Specialty workflows and domain-specific models
    09:50 Why AI is not a doctor (yet)
    12:00 Confidence scores, risk, and human deferral
    15:05 Bias, explainability, and transparency requirements
    18:00 Logging, tool tracing, and auditability
    20:10 Technically correct but contextually wrong examples
    24:20 What builders should focus on now
    26:20 Guardrails, evals, and regulated environments
    27:10 How to reach Keshavon

    Stay tuned. Stay curious. Stay human.

    #HealthcareAI #ResponsibleAI #ExplainableAI

    28 mins
  • AI Can’t Replace Language Access | Patient Safety, Trust, and Ethical Communication in Healthcare
    Dec 17 2025

    Language access is not an administrative task. It is patient safety.

    In this episode of The Signal Room, Chris Hutchins is joined by Carol Velandia, founder of Equal Access Language Services, to explore why communication is the most powerful diagnostic tool in healthcare — and why language barriers quietly undermine safety, trust, compliance, and outcomes.

    Carol makes language inequity visible using a simple comparison: ramps vs. staircases. Staircases aren’t “wrong,” but they fail people who need ramps. In the same way, when English becomes the only language of care, patients with limited English proficiency are effectively asked to climb invisible stairs — through consent forms, diagnosis, discharge instructions, and every critical moment in care.

    We also discuss the role of AI in translation and interpretation: where it can expand access and speed, where it can increase risk, and why artificial intelligence is not moral intelligence. Carol outlines why ethics, empathy, and rapport cannot be outsourced to machines, and why the best path forward is a human + AI partnership with strong standards and accountability.

    If you work in healthcare leadership, clinical operations, patient experience, risk, compliance, or AI strategy, this episode is a practical framework for designing equitable communication as infrastructure — not an afterthought.

    Key topics covered

    • Communication as the most important diagnostic tool
    • Language access as patient safety and civil rights
    • The ramp vs. staircase framework for inequity
    • Why language barriers increase errors and undermine trust
    • AI in translation: speed vs accuracy, and why humans must stay in the loop
    • Ethical communication, interpreter ethics, and professional codes of ethics
    • “Inclusion is infrastructure”: designing language access into systems from day one
    • Practical steps for healthcare organizations building language access programs

    Chapters

    00:00 Language access as ramps vs staircases
    00:45 Why this matters in an AI era
    03:10 Trust, readiness, and frontline communication
    05:10 Communication is the most powerful diagnostic tool
    08:20 Civil rights, policy risk, and meaningful access
    12:10 AI won’t replace ethics, empathy, or rapport
    15:00 Human-in-the-loop translation and post-editing
    17:40 Artificial intelligence is not moral intelligence
    21:45 Inclusion as infrastructure, not an afterthought
    24:10 Making language access visible as patient safety
    26:20 “Language access is the bridge between compliance and compassion”
    27:55 How to reach Carol and learn more

    Connect with Carol Velandia

    Email: carolvelandia@equalaccesslanguageservices.com

    Podcast: Language Access Matters
    Course: Effective Inclusion through Language Access (covers implementation + AI)

    If this episode resonated, share it with a leader working on AI, patient safety, or workforce transformation. Healthcare can’t be human-centered if patients can’t understand — or be understood.

    Stay tuned. Stay curious. Stay human.

    #LanguageAccess #PatientSafety #HealthcareAI

    31 mins
  • Garbage In, Gen AI Out: Data Quality and Healthcare AI Challenges
    Dec 3 2025

    In this conversation, Danette McGilvray and Chris Hutchins discuss the ongoing challenge organizations face in balancing the emphasis on data versus technology. They highlight how organizations pour significant financial investment into technology while often neglecting the data that drives it. The discussion also touches on the organic growth of systems and the need for a better understanding of data management.

    46 mins