• 2026 Predictions | Episode 35
    Jan 8 2026

    AI Security Ops | Episode 35 – 2026 Predictions

    In this episode, the BHIS panel looks into the crystal ball and shares bold predictions for AI in 2026—from energy constraints and drug development breakthroughs to agentic AI risks and cybersecurity threats.

    Chapters

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:14) - Prediction: Grid Power Becomes the Bottleneck
    • (10:27) - Prediction: FDA Qualifies AI Drug Development Tools
    • (15:45) - Prediction: Nation-State Threat Actors Weaponize AI
    • (17:33) - Prediction: Agentic AI Dominates App Development
    • (23:07) - Closing Thoughts: Jobs, Risk & Opportunity

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com



    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin' Fest

    https://wildwesthackinfest.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    25 mins
  • AI Security Ops - Why Did We Create This Podcast? | Podcast Trailer
    Dec 24 2025

    Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security.
    https://discord.gg/bhis

    AI Security Ops | Podcast Trailer – Why Did We Create This Podcast?
    In this episode, the BHIS team explains the purpose behind AI Security Ops, what you can expect from future episodes, and why this show matters for anyone at the intersection of AI and cybersecurity.

    Chapters

    • (00:00) - Intro & Welcome
    • (00:13) - Why We Started AI Security Ops
    • (00:41) - Our Mission: Stay Informed & Ahead
    • (00:56) - What We Cover: AI News & Insights
    • (01:23) - Community Q&A & Real-World Scenarios
    • (02:18) - Special Guests & Industry Leaders
    • (02:41) - Demos, How-Tos & Practical Tips
    • (03:07) - Who Should Listen & Why Subscribe
    • (03:34) - Join the Conversation & Closing

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com



    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin' Fest

    https://wildwesthackinfest.com

    4 mins
  • Community Q&A on AI Security | Episode 34
    Dec 18 2025

    Community Q&A on AI Security | Episode 34

    In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity.

    We break down:

    • Why LLMs sometimes “make stuff up” and how to reduce hallucinations
    • The role of prompts, temperature, and RAG databases in accuracy
    • Prompting best practices and reasoning modes for better results
    • Legal liability: Can you sue ChatGPT for bad advice?
    • Memory features, data retention, and privacy trade-offs
    • Security paranoia: AI apps, trust, and enterprise vs free accounts
    • Practical examples like customizing AI for writing style
    • How to explain AI to your mom (or any non-technical audience)
    • Why AI isn’t magic—just math and advanced auto-complete


    Whether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly.

    Chapters

    • (00:00) - Welcome & Sponsor Shoutouts
    • (00:50) - Episode Overview: Community Q&A
    • (01:19) - Q1: Will ChatGPT Make Stuff Up?
    • (07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases?
    • (11:15) - Q3: How Can AI Improve Without Ingesting Everything?
    • (22:04) - Q4: How Do You Explain AI to Non-Technical People?
    • (28:00) - Closing Remarks & Training Plug

    Brought to you by:
    Black Hills Information Security
    https://www.blackhillsinfosec.com

    Antisyphon Training
    https://www.antisyphontraining.com/

    Active Countermeasures
    https://www.activecountermeasures.com

    Wild West Hackin' Fest
    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –
    https://poweredbybhis.com

    ----------------------------------------------------------------------------------------------
    Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/
    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
    Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/
    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    28 mins
  • AI News Stories | Episode 33
    Dec 11 2025

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com


    AI News | Episode 33
    In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface.

    We break down:

    • AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous AI agents.
    • Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.
    • Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.
    • Amazon’s private AI bug bounty: Nova models under the microscope.
    • Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.
    • PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.


    Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead.

    ⏱️ Chapters

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:27) - AI-Orchestrated Cyber Espionage (Anthropic)
    • (08:10) - ShadowMQ: Critical RCE in AI Inference Engines
    • (09:54) - KawaiiGPT: Free Black-Hat LLM
    • (22:45) - Amazon Nova: Private AI Bug Bounty
    • (26:38) - Google Antigravity IDE Hacked in 24 Hours
    • (31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism

    #AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #Malware

    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    37 mins
  • Model Evasion Attacks | Episode 32
    Dec 4 2025

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com

    Model Evasion Attacks | Episode 32
    In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity.

    We break down:
    • What model evasion attacks are and how they differ from data poisoning
    • How attackers tweak features to bypass classifiers (images, phishing, malware)
    • Real-world tactics like model extraction and trial-and-error evasion
    • Why non-determinism in AI models makes evasion harder to predict
    • Advanced threats: model theft, ablation, and adversarial AI
    • Defensive strategies: adversarial training, API throttling, and realistic expectations
    • Future outlook: regulatory trends, transparency, and the ongoing arms race

    Whether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it.


    #AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #AIThreats


    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    Chapters

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:19) - What Are Model Evasion Attacks?
    • (03:58) - Image Classifiers & Pixel Tweaks
    • (07:01) - Malware Classification & Decision Boundaries
    • (10:02) - Model Theft & Extraction Attacks
    • (13:16) - Non-Determinism & Myth Busting
    • (16:07) - AI in Offensive Capabilities
    • (17:36) - Defensive Strategies & Adversarial Training
    • (20:54) - Vendor Questions & Transparency
    • (23:22) - Future Outlook & Regulatory Trends
    • (25:54) - Panel Takeaways & Closing Thoughts
    29 mins
  • Data Poisoning | Episode 31
    Nov 27 2025

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com


    Data Poisoning Attacks | Episode 31
    In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.

    We break down:

    • What data poisoning is and why it matters
    • How attackers inject malicious samples or flip labels in training sets
    • The role of open-source repositories like Hugging Face in supply chain risk
    • New twists for LLMs: poisoning via reinforcement feedback and RAG
    • Real-world concerns like bias in ChatGPT and malicious model uploads
    • Defensive strategies: governance, provenance, versioning, and security assessments


    Whether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. Treat your AI like a “drunk intern”: verify everything.


    #AISecurity #DataPoisoning #Cybersecurity #BHIS #LLMSecurity #AIThreats


    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    Chapters

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:19) - What Is Data Poisoning?
    • (03:58) - Poisoning Classifier Models
    • (08:10) - Risks in Open-Source Data Sets
    • (12:30) - LLM-Specific Poisoning Vectors
    • (17:04) - RAG and Context Injection
    • (21:25) - Realistic Threats & Examples
    • (25:48) - Defensive Strategies & Governance
    • (28:27) - Panel Takeaways & Closing Thoughts
    31 mins
  • AI News Stories | Episode 30
    Nov 20 2025

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com


    AI News Stories | Episode 30
    In this episode of BHIS Presents: AI Security Ops, we break down the top AI cybersecurity news and trends from November 2025. Our panel covers rising public awareness of AI, the security risks of local LLMs, emerging AI-driven threats, and what these developments mean for security teams. Whether you work in cybersecurity, AI security, or incident response, this episode helps you stay ahead of evolving AI-powered attacks and defenses.

    Topics Covered:

    Only 5% of Americans Are Unaware of AI
    What Pew Research reveals about AI’s penetration into everyday life and workplace usage.
    AI’s Shift to the Intimacy Economy – Project Liberty
    https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1

    Amazon to Cut Jobs and Invest in AI Infrastructure
    14,000 corporate roles eliminated—are layoffs really about efficiency or something else?
    Amazon to Cut Jobs & Invest in AI – DW
    https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365

    Local Models Less Secure than Cloud Providers?
    Why quantization and lack of guardrails make local LLMs more vulnerable to prompt injection and insecure code.
    Local LLMs Security Paradox – Quesma
    https://quesma.com/blog/local-llms-security-paradox

    Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.

    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    Chapters

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:07) - AI’s Shift to the Intimacy Economy (Pew Research)
    • (19:40) - Amazon Layoffs & AI Investment
    • (27:00) - Local LLM Security Paradox
    • (36:32) - Wrap-Up & Key Takeaways
    37 mins
  • A Conversation with Dr. Colin Shea-Blymyer | Episode 29
    Nov 13 2025

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com

    A Conversation with Dr. Colin Shea-Blymyer | Episode 29

    In this episode of BHIS Presents: AI Security Ops, the panel welcomes Dr. Colin Shea-Blymyer for a deep dive into the intersection of AI governance, cybersecurity, and red teaming. From the historical roots of neural networks to today’s regulatory patchwork, we explore how policy, security, and innovation collide in the age of AI. Expect candid insights on emerging risks, open models, and why defining your risk appetite matters more than ever.

    Topics Covered:

    • AI governance vs. innovation: U.S. vs. EU regulatory approaches
    • The evolution of neural networks and lessons from AI history
    • AI red teaming: definitions, methodologies, and data-sharing challenges
    • Safety vs. security: where they overlap and diverge
    • Emerging risks: supply chain vulnerabilities, prompt injection, and poisoned data
    • Open weights vs. closed models: implications for research and security
    • Practical takeaways for organizations navigating AI uncertainty


    About the Panel:
    Joff Thyer, Dr. Brian Fehrman, Derek Banks
    Guest Panelist: Dr. Colin Shea-Blymyer
    https://cset.georgetown.edu/staff/colin-shea-blymyer/

    #AISecurity #AIGovernance #CyberRisk #AIRedTeam #OpenModels #AIPolicy #BHIS #AIThreats #AIInCybersecurity #LLMSecurity


    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    Chapters

    • (00:00) - Intro & Guest Welcome
    • (02:14) - Colin’s Journey: From CS to AI Governance
    • (06:33) - Lessons from AI History & Neural Network Origins
    • (10:28) - AI Red Teaming: Definitions & Methodologies
    • (15:11) - Safety vs. Security: Where They Intersect
    • (22:47) - Regulatory Landscape: U.S. Patchwork vs. EU AI Act
    • (33:42) - Open Models Debate: Risks & Research Benefits
    • (38:19) - Emerging Threats & Supply Chain Risks
    • (44:06) - Practical Takeaways & Closing Thoughts
    47 mins