
Just 250 Malicious Samples Can Poison AI Models - John Bruggeman's inside briefing on AI BrainRot - EP 20



In this episode of the Ransomware Rewind podcast, host Joe Erle (@joe_erle) interviews John Bruggeman, Chief Information Security Officer (CISO) at CBTS and OnX, about emerging cybersecurity threats such as AI model poisoning and prompt injection attacks. Drawing on more than 25 years in cybersecurity, John explains how unsanitized inputs and as few as 250 malicious data points can cause "brain rot," or model decay, in large language models (LLMs), resulting in unreliable outputs, hidden backdoors, and long-term AI vulnerabilities. He walks through real-world AI attack vectors, including tool poisoning via hidden HTML in emails, agent session smuggling in enterprise tools like Microsoft Copilot, and remote code execution risks that enable data exfiltration or excessive resource consumption. The discussion also covers recent DNS outages at Microsoft and AWS, illustrating how weaknesses in critical infrastructure compound AI security risks.

John shares practical best practices for protecting AI systems: sanitize all inputs, enforce human-in-the-loop oversight, keep clean backups for model recovery, and build in ethical guardrails inspired by Isaac Asimov's laws of robotics. The conversation then turns to ethical concerns in AI, including Reddit-driven misinformation campaigns, AI's psychological impact on vulnerable users such as teenagers, and why LLMs aren't truly sentient (they're just advanced next-word predictors). Plus, a lively debate on AI's future: utopian Star Trek scenarios versus dystopian Skynet dangers.

Packed with actionable insights on AI security, data poisoning prevention, and cybersecurity strategy, this episode is a must-listen for CISOs, IT leaders, security professionals, and businesses deploying AI in high-risk environments. Tune in to Ransomware Rewind for expert advice on safeguarding your AI models, preventing prompt injection, and staying ahead of cyber threats. Available now on your favorite podcast platform!

Episode Chapters - Key Moments
00:00 First Leak - Prompt attacks begin
02:00 Breaches & Insurance - Who pays when it breaks
05:30 Human Error - Why people cause most damage
10:00 Model Decay - When systems slowly forget
15:30 Training Data Risk - Bad data, bad outcomes
22:00 LLM Attacks - Hackers follow the spotlight
30:00 Red Teaming - Break it before they do
38:00 Guardrails - Rules that keep speed safe
46:00 Startups - Small teams, big targets
55:00 The Future - What keeps CISOs awake

Guest: John Bruggeman, Chief Information Security Officer at CBTS and OnX
LinkedIn: / johnbruggeman
Website: http://www.huc.edu/

Host: Joe Erle, Cyber Group Practice Leader at C3 Insurance
LinkedIn: / joeerle
X: https://x.com/joe_erle
TikTok: / itscyberjoe
Instagram: / itscyberjoe
Facebook: / joeerle

Mike Dowdy
LinkedIn: / mikedowdy

Listen on Apple Music, Spotify, and YouTube. Thanks for listening, and don't forget to follow the pod and leave a review.