Episodes

  • Episode 24: Voice AI Under Attack: Hackers Exploit AI Call Agents | Traffic Light Protocol Podcast
    Sep 16 2025

    Send us a text

    Voice AI is moving fast — but so are the attackers.

    In this episode of the Traffic Light Protocol Podcast, Clint and Myles break down how scammers are exploiting Voice AI platforms with the same tricks that wrecked email and telecom decades ago:

    • Premium-rate fraud dressed up in AI clothing
    • Bot-driven spam that floods calendars and burns ops teams
    • Consent loopholes where “user input” becomes an attacker’s best weapon

    This isn’t FUD. It’s happening right now, and the industry risks walking into the same “secure it later” trap we’ve seen before.

    We dig into why this matters for anyone deploying AI into customer-facing systems, what patterns connect it to broader cybercrime trends, and the hard questions leaders should be asking before they put an AI agent on the phone network.
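
    If you’re already wiring an agent into telephony, one of those hard questions is whether it can be tricked into dialing premium-rate numbers. Below is a minimal outbound-dial guard sketched in Python; the country codes, blocked prefixes, and the place_call stub are hypothetical placeholders, not a real provider API.

    ```python
    # Minimal outbound-dial guard for a voice AI agent (illustrative sketch).
    # The prefix lists and place_call() stub are hypothetical placeholders --
    # substitute your telephony provider's SDK and your own fraud intelligence.

    ALLOWED_COUNTRY_CODES = {"+61", "+64"}       # example: AU/NZ only
    BLOCKED_PREFIXES = {"+675", "+882", "+883"}  # example high-risk prefixes

    def place_call(e164_number: str) -> None:
        # Stub for illustration; swap in your provider's dial call here.
        print(f"dialing {e164_number}")

    def is_dial_allowed(e164_number: str) -> bool:
        """Allow a destination only if it passes both prefix checks."""
        number = e164_number.strip().replace(" ", "")
        if any(number.startswith(p) for p in BLOCKED_PREFIXES):
            return False
        return any(number.startswith(cc) for cc in ALLOWED_COUNTRY_CODES)

    def guarded_dial(e164_number: str) -> None:
        if not is_dial_allowed(e164_number):
            raise PermissionError(f"blocked outbound dial to {e164_number}")
        place_call(e164_number)
    ```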

    If you care about AI, fraud, and the future of secure automation, then this one’s for you.

    Join the AI Cyber Security Skool Group
    Inside the group, you’ll learn how to defend against prompt injections, lock down API keys, and stop your automations from turning into costly incidents. It’s a space for cyber pros, engineers, and AI builders to share playbooks, tools, and real-world lessons on keeping AI secure.
    https://www.skool.com/ai-automation-security-5754/about?ref=3e3ebf81027c4bceb6f7cbfdbabe22ea

    55 mins
  • Episode 23: AI Voice Agent Security – Voice AI Under Siege: SIP Spoofing, Cost Drain, and How to Fight Back
    Sep 5 2025

    Send us a text

    In this episode of Traffic Light Protocol, we kick off our AI series with a hard look at how voice AI agents are being targeted, and how quickly small businesses and startups can rack up serious bills overnight.

    Guest Myles Agnew returns to unpack how old-school telecom tricks are being repurposed in the age of SIP/VoIP and AI: caller ID spoofing, open SIP trunks, and automated call loops that tie up your agents and quietly burn cash. We break down how easy it is to spin up a low-cost PABX, why authentication is weak in SIP land, and what practical controls you can turn on today to reduce fraud and noise.

    What we cover:

    • How SIP (Session Initiation Protocol) is abused to hit voice AI agents
    • Why caller ID “verification” often isn’t verification at all
    • The $5–$10/month attacker vs. your $/minute billing problem
    • Channels/lines, trunk limits, and how attackers amplify cost
    • Geo-fencing, call gating, and rate limits that actually help
    • “Stop loss” ideas for web and voice agents
    • How provider security maturity (and defaults) drives your risk
    • Where laws and policies are heading (AU, US) and what to watch

    If you’re building or buying voice AI, this is a must-listen before you scale.
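
    To make the rate-limiting and “stop loss” ideas above concrete, here’s a minimal Python sketch of a per-caller rate limit plus a daily spend cap sitting in front of a voice agent. The thresholds, cost estimate, and the answer-path hook are assumptions to adapt to your own stack, not provider features.

    ```python
    # Illustrative per-caller rate limit + daily spend cap for a voice AI agent.
    # Thresholds and the assumed per-minute cost are examples only.
    import time
    from collections import defaultdict, deque

    MAX_CALLS_PER_CALLER_PER_HOUR = 5
    DAILY_SPEND_CAP_USD = 50.00
    EST_COST_PER_MINUTE_USD = 0.15      # assumed blended AI + telco cost

    _recent_calls = defaultdict(deque)  # caller_id -> timestamps of recent calls
    _spend_today_usd = 0.0

    def should_accept(caller_id: str) -> bool:
        """Reject the call if the caller is too chatty or today's budget is gone."""
        now = time.time()
        window = _recent_calls[caller_id]
        while window and now - window[0] > 3600:
            window.popleft()                      # drop calls older than an hour
        if len(window) >= MAX_CALLS_PER_CALLER_PER_HOUR:
            return False
        if _spend_today_usd >= DAILY_SPEND_CAP_USD:
            return False                          # "stop loss": stop answering, alert a human
        window.append(now)
        return True

    def record_call_cost(duration_seconds: float) -> None:
        """Track spend so the cap above can trip."""
        global _spend_today_usd
        _spend_today_usd += (duration_seconds / 60) * EST_COST_PER_MINUTE_USD
    ```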

    Free course (limited time): The AI Cybersecurity Starter Pack


    Get practical checklists, templates (incident response, HIPAA/GDPR/APPs), and step-by-step hardening for AI apps and AI voice agents.

    Join the Skool community and learn how to protect your voice AI from abuse.

    https://www.skool.com/ai-automation-security-5754/about?ref=3e3ebf81027c4bceb6f7cbfdbabe22ea

    Join the AI Cyber Security Skool Group
    Inside the group, you’ll learn how to defend against prompt injections, lock down API keys, and stop your automations from turning into costly incidents. It’s a space for cyber pros, engineers, and AI builders to share playbooks, tools, and real-world lessons on keeping AI secure.
    https://www.skool.com/ai-automation-security-5754/about?ref=3e3ebf81027c4bceb6f7cbfdbabe22ea

    34 mins
  • Episode 22: AI Chat Forensics: How to Find, Investigate, and Analyse Evidence from ChatGPT, Claude & Gemini
    Jun 22 2025

    Send us a text

    Unlock the secrets behind digital forensic investigations into AI chat platforms like ChatGPT, Claude, and Google's Gemini in this insightful episode. Learn the precise methods for discovering, extracting, and interpreting digital evidence across Windows, Mac, and Linux environments, whether it's browser caches, memory forensics, network logs, or cloud-based data exports.

    From identifying subtle signs of malicious AI usage and attempts to evade security controls, to piecing together forensic timelines, this podcast provides practical, hands-on guidance tailored for cybersecurity professionals, forensic analysts, and IT investigators. Tune in now and boost your expertise in this emerging field of AI-driven digital forensics.

    You'll learn:

    AI Chat Evidence Locations
    Discover exactly where to find critical forensic evidence from ChatGPT, Claude, and Gemini across Windows, Mac, and Linux systems.

    Extracting and Analyzing Chat Data
    Learn practical techniques to extract, review, and interpret digital artifacts, including browser caches, local storage, memory dumps, and network logs.

    Identifying AI Jailbreaking and Misuse
    Understand how to spot attempts to bypass AI guardrails and recognize malicious prompts or suspicious activity within chat logs.

    Cloud vs Local Forensic Challenges
    Explore unique challenges associated with investigating cloud-based AI platforms versus local installations, and how to overcome them.

    Building Effective Forensic Timelines
    Master the art of assembling comprehensive forensic timelines by integrating timestamps, metadata, network traffic, and other key sources of digital evidence.
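
    As a starting point for that timeline work, here’s a rough Python sketch that flattens a ChatGPT data export’s conversations.json into a chronological list of messages. The field names (mapping, message, create_time, author.role, content.parts) reflect the export layout commonly seen at the time of writing (see the OpenAI help link below) and may change, so verify them against your own export.

    ```python
    # Rough sketch: flatten a ChatGPT export's conversations.json into a timeline.
    # Field names are assumptions based on exports observed at the time of
    # writing -- verify against your own export before relying on this.
    import json
    from datetime import datetime, timezone

    def export_timeline(path: str):
        with open(path, encoding="utf-8") as fh:
            conversations = json.load(fh)

        rows = []
        for convo in conversations:
            title = convo.get("title", "untitled")
            for node in (convo.get("mapping") or {}).values():
                msg = node.get("message")
                if not msg or not msg.get("create_time"):
                    continue
                ts = datetime.fromtimestamp(msg["create_time"], tz=timezone.utc)
                role = (msg.get("author") or {}).get("role", "unknown")
                parts = (msg.get("content") or {}).get("parts") or []
                text = " ".join(p for p in parts if isinstance(p, str))[:120]
                rows.append((ts.isoformat(), title, role, text))

        return sorted(rows)  # chronological order for the forensic timeline

    if __name__ == "__main__":
        for row in export_timeline("conversations.json"):
            print(" | ".join(row))
    ```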


    Links and references

    https://help.openai.com/en/articles/7260999-how-do-i-export-my-chatgpt-history-and-data

    https://pvieito.com/2024/07/chatgpt-unprotected-conversations

    https://www.scribd.com/document/818273058/Conversational-AI-forensics#:~:text=of%20Gemini%20are%20stored%20in,based%20mobile%20app

    https://ar5iv.labs.arxiv.org/html/2505.23938v1#:~:text=source%20for%20corroborating%20evidence,of%20the%20NationalSecureBank%20phishing%20email

    aletheia.medium.com

    Join the AI Cyber Security Skool Group
    Inside the group, you’ll learn how to defend against prompt injections, lock down API keys, and stop your automations from turning into costly incidents. It’s a space for cyber pros, engineers, and AI builders to share playbooks, tools, and real-world lessons on keeping AI secure.
    https://www.skool.com/ai-automation-security-5754/about?ref=3e3ebf81027c4bceb6f7cbfdbabe22ea

    42 mins
  • Episode 21: How IRCO is Changing DFIR: The AI Copilot for Real-Time Cyber Investigations
    Jun 10 2025

    Send us a text

    Link to IRCO (Incident Response Copilot) on ChatGPT:

    https://chatgpt.com/g/g-68033ce1b26481919b26df0737241bac-irco-incident-response-co-pilot

    In this episode of TLP: The Digital Forensics Podcast, Clint dives deep into IRCO, a custom GPT designed specifically for DFIR and SOC analysts. From real-world cyber incidents to post-incident reporting and CTF training, IRCO acts like an AI-powered colleague: fast, focused, and built for real investigations and CTFs alike.

    Learn how this tool understands your forensic workflows, decodes technical jargon, and supports smarter, faster investigations. Clint shares how to start using IRCO, common use cases, how to keep your data safe, and why many in the field are underestimating its capability.

    Whether you're writing reports, analyzing logs, or stuck mid-incident, IRCO can give you the 1% edge you need to solve tricky DFIR investigations and communicate your findings more quickly.

    🔍 Topics covered:
    – What is IRCO?
    – How to integrate AI into digital forensics workflows
    – Using IRCO for live incidents, CTFs, and training
    – Privacy and responsible AI use in SOC environments
    – Actionable prompts and use cases

    🎧 Subscribe to TLP now and give IRCO a test run. You might just find your new secret weapon for responding to incidents faster than ever.

    https://chatgpt.com/g/g-68033ce1b26481919b26df0737241bac-irco-incident-response-co-pilot

    16 mins
  • Episode 20: What Makes an Elite Incident Response Team: Mindset, Mastery, and Real-World DFIR Lessons
    Jun 4 2025

    Send us a text

    Drawing inspiration from observing military special forces and more than five years of hands-on DFIR experience, Clint Marsden explores the mindset, habits, and tactical processes that set top-performing IR teams apart.

    From threat intelligence workflows and detection-first thinking to deep forensic analysis and clear executive reporting, this episode is packed with real-world lessons, relatable stories, and practical advice. Whether you're running your first threat hunt or leading an enterprise SOC, you'll walk away with a clearer vision for building a resilient, high-performing IR capability.

    You’ll learn:

    • Why elite IR teams focus on boring repetition and clarity over cool tools
    • How to track threat groups and adapt detection rules in real time
    • Where most SOCs fail with SIEM tuning and memory forensics
    • How to communicate findings that actually move leadership to act

    Check out the blog: www.dfirinsights.com

    Join the AI Cyber Security Skool Group
    Inside the group, you’ll learn how to defend against prompt injections, lock down API keys, and stop your automations from turning into costly incidents. It’s a space for cyber pros, engineers, and AI builders to share playbooks, tools, and real-world lessons on keeping AI secure.
    https://www.skool.com/ai-automation-security-5754/about?ref=3e3ebf81027c4bceb6f7cbfdbabe22ea

    39 mins
  • Episode 19: AI Data Poisoning: How Bad Actors Corrupt Machine Learning Systems for Under $60
    May 26 2025

    Send us a text

    Clint Marsden breaks down a critical cybersecurity advisory from agencies including CISA, the NSA, and the FBI about the growing threat of AI data poisoning. Learn how malicious actors can hijack AI systems for as little as $60, turning machine learning models against their intended purpose by corrupting training data.

    Clint explains the technical concept of data poisoning in accessible terms, comparing it to teaching a child the wrong labels for objects. He walks through the six-stage framework where AI systems become vulnerable, from initial design to production deployment, and covers the ten security recommendations intelligence agencies are now promoting to defend against these attacks.

    The episode explores real-world examples of AI systems gone wrong, from shopping bots buying drugs on the dark web to coordinated attacks by online communities. You'll discover practical mitigation strategies including cryptographic verification, secure data storage, anomaly detection, and the importance of "human in the loop" safeguards.
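
    As one concrete example of the cryptographic verification idea, the sketch below builds a SHA-256 manifest of training-data files and re-checks it before each training run, so a silently swapped or modified file fails verification. Paths and file names are illustrative only.

    ```python
    # Illustrative integrity check for training data: build a SHA-256 manifest,
    # then verify it before every training run. Paths and names are examples.
    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(data_dir: str, manifest_path: str = "manifest.json") -> None:
        """Record a hash for every file under the dataset directory."""
        manifest = {str(p): sha256_of(p)
                    for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}
        Path(manifest_path).write_text(json.dumps(manifest, indent=2))

    def verify_manifest(manifest_path: str = "manifest.json") -> list[str]:
        """Return files whose hashes no longer match (possible tampering)."""
        manifest = json.loads(Path(manifest_path).read_text())
        return [f for f, digest in manifest.items()
                if not Path(f).is_file() or sha256_of(Path(f)) != digest]
    ```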

    Whether you're a cybersecurity professional, AI developer, or simply curious about emerging digital threats, this episode provides essential insights into protecting AI systems from manipulation and understanding why data integrity has become a national security concern.

    Key Topics Covered:

    • Split view poisoning and expired domain attacks
    • Data sanitization and anomaly detection techniques
    • Zero trust principles for AI infrastructure
    • The role of adversarial machine learning in cybersecurity
    • Why defenders must learn AI as quickly as attackers

    The joint advisory PDF from CISA et al.: https://www.ic3.gov/CSA/2025/250522.pdf

    Join the AI Cyber Security Skool Group
    Inside the group, you’ll learn how to defend against prompt injections, lock down API keys, and stop your automations from turning into costly incidents. It’s a space for cyber pros, engineers, and AI builders to share playbooks, tools, and real-world lessons on keeping AI secure.
    https://www.skool.com/ai-automation-security-5754/about?ref=3e3ebf81027c4bceb6f7cbfdbabe22ea

    27 mins
  • Audiobook: Mastering Sysmon – Deploying, Configuring, and Tuning in 10 Easy Steps
    Feb 28 2025

    Send us a text

    This episode features the complete narration of my ebook: Mastering Sysmon – Deploying, Configuring, and Tuning in 10 Easy Steps, providing a step-by-step guide to getting Sysmon up and running for better threat detection and incident response.

    If you’re in security operations, digital forensics, or incident response, this episode will help you:

    • Deploy Sysmon efficiently.
    • Tune Sysmon logs for maximum insight while reducing noise.
    • Use Sysmon for investigations—from process creation tracking to network monitoring.
    • Understand real-world use cases of how Sysmon can catch adversaries in action.

    Key Topics Covered:

    • Why Sysmon Matters – A deep dive into how Sysmon enhances Windows logging.
    • Common Mistakes & How to Avoid Them – Logging misconfigurations, tuning issues, and evidence handling best practices.
    • Step-by-Step Deployment Guide – From downloading Sysmon to configuring it for lean detections.
    • Tuning for Performance & Relevance – How to tweak Sysmon settings to avoid excessive log volume.
    • Investigating Security Events – Key Sysmon event IDs that provide forensic gold (a quick query sketch follows this list).
    • Real-World Use Cases – Examples of how Sysmon has caught attackers in action.
    • Sysmon Bypass Techniques – How adversaries evade detection and how to stay ahead.
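
    To give the “key event IDs” point above a concrete shape, here’s a small Python sketch that wraps wevtutil to pull recent Sysmon process-creation events (Event ID 1). It assumes Sysmon is already installed and logging to the usual operational channel, and the output handling is deliberately minimal.

    ```python
    # Pull recent Sysmon Event ID 1 (process creation) records via wevtutil.
    # Assumes Sysmon is installed and writing to its usual operational channel.
    import subprocess

    def recent_process_creations(count: int = 10) -> str:
        cmd = [
            "wevtutil", "qe", "Microsoft-Windows-Sysmon/Operational",
            "/q:*[System[(EventID=1)]]",   # XPath filter: process creation only
            f"/c:{count}",                 # number of events to return
            "/rd:true",                    # newest first
            "/f:text",                     # human-readable output
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(recent_process_creations(5))
    ```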

    Resources Mentioned:

    1. Sysmon Download – Microsoft Sysinternals
    2. Sysmon Configuration Files – Olaf Hartong’s Sysmon-Modular
    3. MITRE ATT&CK Framework – MITRE ATT&CK
    4. ACSC Sysmon Config Guide – ACSC GitHub

    Key Takeaways:

    • Sysmon provides deep system visibility – if tuned correctly.
    • Tuning is essential – Avoid log overload while keeping useful data.
    • Use a structured deployment process – From baselining performance to verifying logs.
    • Sysmon alone isn’t enough – It works best when combined with other detection tools.
    • Be aware of bypass techniques – Attackers can disable Sysmon, so defense in depth is key.

    Join the AI Cyber Security Skool Group
    Inside the group, you’ll learn how to defend against prompt injections, lock down API keys, and stop your automations from turning into costly incidents. It’s a space for cyber pros, engineers, and AI builders to share playbooks, tools, and real-world lessons on keeping AI secure.
    https://www.skool.com/ai-automation-security-5754/about?ref=3e3ebf81027c4bceb6f7cbfdbabe22ea

    44 mins
  • Episode 17 - Building a CTF
    Feb 27 2025

    Send us a text

    So You Want to Build Your Own DFIR CTF?

    Ever wanted to build your own Digital Forensics and Incident Response (DFIR) Capture the Flag (CTF) challenge but weren’t sure where to start? In this episode of Traffic Light Protocol, we share a practical how-to for CTF builders, making it easy for anyone to get started, no pentesting skills required!

    Today's episode includes:

    • Choosing Your CTF Theme – Using MITRE ATT&CK and APT tracking to craft a realistic attack scenario.
    • Setting Up the Lab – Spinning up a Windows VM, configuring Sysmon, and enabling forensic logging.
    • Running the Attack Simulations – Using Atomic Red Team to generate forensic artifacts (a short scripting sketch follows this list).
    • Testing & Troubleshooting – Making sure your tests actually work before unleashing them on your team.
    • Building an Engaging Story – Crafting a compelling incident narrative that challenges analysts to think like investigators.
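
    For the attack-simulation step above, one minimal way to script it is to drive the Invoke-AtomicRedTeam PowerShell module from Python. It assumes the module is already installed on a lab VM; the technique ID is just an example, and you should check prerequisites and clean up afterwards, never run this against production.

    ```python
    # Run one Atomic Red Team test on a lab VM by shelling out to PowerShell.
    # Assumes the Invoke-AtomicRedTeam module is installed; T1059.001 is an example.
    import subprocess

    def run_atomic(technique: str = "T1059.001") -> None:
        steps = [
            f"Invoke-AtomicTest {technique} -CheckPrereqs",  # verify the test can run
            f"Invoke-AtomicTest {technique}",                # generate the artifacts
            f"Invoke-AtomicTest {technique} -Cleanup",       # tidy up afterwards
        ]
        for step in steps:
            subprocess.run(["powershell.exe", "-NoProfile", "-Command", step],
                           check=True)

    if __name__ == "__main__":
        run_atomic()
    ```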

    Resources mentioned in the podcast:

    https://drive.google.com/drive/folders/1vF3y-OlsowjX9LUOi7ywDy8VgcfhWcUX?usp=sharing

    Join the AI Cyber Security Skool Group
    Inside the group, you’ll learn how to defend against prompt injections, lock down API keys, and stop your automations from turning into costly incidents. It’s a space for cyber pros, engineers, and AI builders to share playbooks, tools, and real-world lessons on keeping AI secure.
    https://www.skool.com/ai-automation-security-5754/about?ref=3e3ebf81027c4bceb6f7cbfdbabe22ea

    29 mins