• How Dam Secure Puts Guardrails on AI Generated Code
    Apr 29 2026
    Episode Summary

Vibe coding is here, and most organisations are nowhere near ready for what it means for security. In this episode of Secured, Cole Cornford sits down with Patrick Collins and Simon Harloff, founders of Dam Secure, to unpack how AI is reshaping software development and why the old AppSec playbook is not keeping up.

    They cover the shift from artisanal to factory model engineering, why skills and agents.md files are less reliable than people think, and why the SaaSpocalypse narrative is mostly a distraction from the work that actually matters. Patrick and Simon also walk through how Dam Secure enforces organisational security rules at plan time, before a single line of AI generated code gets written.

    Timestamps

    00:00 Trailer

    01:01 Chainguard ad

    01:28 Meet Patrick Collins and Simon Harloff from Dam Secure

    03:00 Why existing AppSec tooling never worked for developers

    05:30 The artisanal vs factory model of software development

    08:30 Hacker News, polarisation and the AI sentiment shift

    11:00 Agile, standups and processes that no longer make sense

    14:00 Bigger PRs, higher velocity and workflows without an IDE

    17:00 Skills, agents.md and the limits of deterministic guardrails

    20:00 The AppSec to developer ratio problem

    23:00 The SaaSpocalypse and why rebuilding tools is a side quest

    27:00 React, digital certificates and security through business incentives

    30:00 How Dam Secure works: secure spec and plan time enforcement

    34:00 Vibe coders, Lovable and the risk beyond professional developers

    36:00 Where to find Dam Secure and closing remarks

    🐙 Secured is grateful to be sponsored and supported by Chainguard.

    Chainguard is the trusted source for open source. Get hardened, secure, production-ready builds so your team can ship faster, stay compliant, and reduce risk. Download your free CVE Reduction Assessment at https://dayone.fm/chainguard

Secured is part of Day One. Day One helps founders and startup operators make better business decisions more often.

    To learn more, join our newsletter to be notified of new First Cheque episodes and upcoming shows.



    This podcast uses the following third-party services for analysis:

    Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp
    Spotify Ad Analytics - https://www.spotify.com/us/legal/ad-analytics-privacy-policy/
    38 mins
  • (Replay Episode) Breaking Barriers: How Sam Fariborz Navigated the Aussie Cybersecurity Landscape
    Apr 16 2026
    Episode Summary

When Sam Fariborz moved to Australia from Iran, she had been working as an IT manager. While she had plenty of experience and strong technical skills, the move to Australia was challenging, and in this episode Sam discusses some of the barriers to entry she faced. By attending cybersecurity events and reaching out to people on LinkedIn, Sam found mentors and peers who helped progress her career, and today Sam is Cybersecurity Services & Program Manager for Kmart Group, which employs nearly 50,000 people across Australia and New Zealand. Sam chats with Cole Cornford about how to network effectively, the growth of cybersecurity as a profession over the last couple of decades, the need for greater diversity within the industry, and plenty more.


    Mentioned in this episode:

    Download your free CVE Reduction Assessment

    Chainguard is the trusted source for open source. Get hardened, secure, production-ready builds so your team can ship faster, stay compliant, and reduce risk.

    December 2025 - Chainguard



    37 mins
  • What the ISM AI Update Actually Means for Cyber Teams
    Apr 1 2026
    Episode Summary

    The ISM has been updated again, and this time AI is front and centre. In this episode of Secured, Cole Cornford is joined by returning guest Toby Amodio, Practice Lead at Fujitsu Cybersecurity Services, for another instalment of Policy Wonks and Gronks, cutting through the vendor noise to talk about what the March 2026 update actually means in practice.

    They explore where AI is genuinely delivering value for cyber professionals, from automating compliance mapping and vendor assessments to streamlining pen test reporting and SOC triage. But they are equally candid about the risks: the erosion of foundational skills as junior roles get outsourced to AI, the creeping fatigue of reviewing outputs at scale, and the danger of skipping straight to full automation without the expertise to validate what the machine is doing.

The conversation also tackles bigger-picture concerns unique to Australia: sovereign AI capability, the risk of a brain drain to the US, and whether a small country can afford to decentralise its AI infrastructure. Toby closes with a sharp reminder for government CISOs: AI is just another system, and how people use it matters far more than the certifications attached to it.

    Timestamps

    00:00 Episode Trailer

    01:01 Chainguard ad

    01:28 Intro and the March 2026 ISM update

    03:00 AI hype vs real world utility

    05:00 Governance and compliance use cases

    08:00 Vendor assessments and knowledge base automation

    11:00 Skill erosion and the junior roles question

    14:00 AI in pen testing: reporting, scoping and customer experience

    17:30 The maturity model for AI adoption

    21:00 Vibe coding, slop assurance and fatigue at scale

    25:00 Agents watching agents and the bot vs bot future

    28:30 Australian AI sovereignty and the brain drain risk

    32:00 Top tip for government CISOs on AI risk

    35:00 Shadow AI and DNS log visibility

    37:00 Closing remarks




    34 mins
  • (Replay Ep) Leading Change in Cybersecurity: Tara Whitehead’s Approach to Security Engagement
    Mar 25 2026
    Episode Summary

Tara Whitehead is Security Engagement Manager at MYOB. Prior to becoming a cybersecurity specialist, Tara had an eclectic career, including stints in advertising and international relations. In this episode Tara chats with Cole about how her non-technical background has in many ways been an asset in security, leading change management in large enterprises, the importance of great communication skills, and plenty more.

    Timestamps

    7:15 - Tara's first days in AppSec

    10:00 - How to influence people

    12:30 - Why we should dial back on the doomsday conversation

    14:10 - Find your change champions

21:30 - Is a non-technical background a help or a hindrance?

    23:30 - Communication and influencing key skills

    26:00 - Communicating with execs

    28:20 - Rapid fire questions





    36 mins
  • AI in AppSec: Hype, Layoffs and What's Actually Real
    Mar 4 2026
    Episode Summary

    Artificial intelligence is dominating headlines in cybersecurity, but how much of it holds up under scrutiny? In this solo episode of Secured, Cole Cornford, founder and CEO of Galah Cyber, shares his unfiltered take on three of the biggest AI narratives making waves in the AppSec space right now.

    Cole breaks down the Claude Code security announcement and why the market reaction dramatically overstated its real-world impact, arguing that the most meaningful security vulnerabilities have never been the ones static analysis tools can easily catch. He then examines Aikido's continuous penetration testing proposition, raising serious questions around noise, cost, resilience, and whether most organisations are even architected to support it.

    Finally, Cole tackles the AI job displacement narrative head-on, making the case that most high-profile tech layoffs are less about AI capability and more about mismanaged businesses using automation as convenient cover for decisions driven by poor performance and investor pressure.

    Timestamps

    00:00 – Intro & Cole's hot take on AI hype

    01:30 – Claude Code Security: what it is and why markets overreacted

    03:30 – Why meaningful vulnerabilities need context, not static analysis

    05:30 – Autofix, token waste, and who's actually using Claude Code

    08:00 – Aikido Infinite: the continuous pen testing promise

    10:00 – Cost, resilience, and noise concerns with Aikido

    12:49 – The AI jobs narrative: Cole's verdict

    14:30 – WiseTech, Block, and the smokescreen theory

    16:00 – Jobs shift, not job loss

    17:03 – Closing thoughts and solo format feedback


    Mentioned in this episode:

    Call for Feedback



    19 mins
  • How AI Pen Testing Actually Works (and Where It Breaks)
    Feb 18 2026
    Episode Summary

    AI is starting to change penetration testing, but most people are asking the wrong question. In this episode of Secured, Cole Cornford sits down with Brendan Dolan-Gavitt, AI researcher at XBOW and former NYU professor, to unpack what autonomous pen testing really is, what it can reliably do today, and what still needs humans.

    They explore why AI agents are great at scaling the boring parts of testing, like authenticated workflows and broad vulnerability coverage across huge attack surfaces, and why that does not automatically translate to deep, context-aware exploitation. The conversation also gets into the messy parts: AI systems overclaiming “serious” findings, business logic flaws that are hard to verify, audit expectations, and why scope control needs real guardrails, not vibes. From agent traces and validation models to cost curves and creative exfiltration tricks, this episode is a grounded look at where AI helps AppSec and where it can still cause damage if you trust it too much.

    Timestamps

    00:00 – Intro

    03:10 – From academia to building autonomous security tools

    05:00 – Human pen testers vs AI agents: what is actually different

    06:40 – Where AI helps most: boring tasks and low hanging fruit

    08:30 – Scale: a thousand targets vs hiring a thousand testers

    10:20 – Accessibility, economics, and Jevons paradox

    12:30 – Accountability: audit evidence, traces, and “who signs off”

    14:40 – Scope control: avoiding prod and preventing out-of-scope actions

    16:20 – Safety checkers, overseer agents, and persuasion resistance

    18:40 – The cost question: VC money, inference pricing, and efficiency

    21:20 – When AI wastes money and why prioritisation matters

    23:50 – Failure mode: overclaiming business “vulnerabilities”

    26:10 – Validation agents and adversarial peer review

    28:40 – The scary clever stuff: exfiltrating files as images

    31:00 – What AI finds well: XSS, SQLi, file traversal, hard proof bugs

    33:10 – What AI struggles with: business logic and contextual judgement

    35:20 – Hype vs skepticism and why nobody has a crystal ball




    42 mins
  • AI, Hiring, and Trust: Why Shortcuts Break Interviews
    Feb 4 2026
    Episode Summary

    Hiring is still a human process, no matter how much AI gets injected into it. In this episode of Secured, Cole Cornford sits down with Kim Acosta, Managing Director at UCentric and former Amazon talent acquisition leader, to unpack how AI is actually changing recruitment and where it is quietly breaking trust.

They explore how candidates are using AI in applications and technical assessments, why misuse often damages long-term employability more than failing an interview, and why recruiters and hiring managers are responding with stricter controls, in-person assessments, and AI detection. Kim shares what she is seeing across data, analytics, and AI roles, where demand is growing, and why human judgment, rapport, and credibility still matter far more than perfect answers.

    The conversation also covers embedded recruitment and RPO models, why soft skills matter more as teams get smaller, and what the next hiring cycle is likely to look like as big tech contracts while smaller companies continue to grow. For candidates, hiring managers, and founders alike, this episode is a grounded look at why shortcuts rarely pay off and why trust is still the real signal.

    Timestamps

    00:00 – Intro

    01:24 – Meet Kim Acosta and UCentric

    02:06 – From Amazon to starting a recruitment consultancy

    04:19 – Data engineering demand vs AI hype

    05:31 – What data engineering roles actually look like

    07:27 – Adapting business models to real market needs

    10:04 – Where AI genuinely helps recruiters

    11:09 – Custom GPTs and interview preparation

    13:43 – One way interviews and candidate slop

    15:09 – Technical assessments and AI misuse

    17:19 – Trust, failure, and reapplying the right way

    18:29 – Spotting AI generated answers in interviews

    20:19 – Rapport, eye contact, and human signals

    22:19 – Hiring for values and team fit

    23:52 – Agency vs internal vs embedded recruiters

    27:59 – RPO models and cost tradeoffs

    28:47 – Layoffs, market shifts, and salary reality

    30:57 – Where hiring is still strong

    33:10 – Why hiring and podcasts still need humans




    34 mins
  • PSPF Changes Explained for Security Leaders
    Jan 21 2026
    Episode Summary

The Protective Security Policy Framework is meant to guide how government manages security risk, but constant updates make it harder to implement than to understand. In this episode of Secured, Cole Cornford is joined by Toby Amodio, Practice Lead at Fujitsu Cybersecurity Services and a former senior cybersecurity leader across Australian government, to break down what actually changed in the latest PSPF update and why it matters in practice.

    They examine the growing focus on personnel security and foreign interference risk, the inclusion of AI guidance that adds little beyond basic risk assessment, and the long overdue recognition of Secure Service Edge and SASE as compliant gateways. The conversation also explores why deny lists and centralised risk sharing sound sensible on paper but are far harder to enforce in reality, and why most security failures still come down to behaviour, accountability, and how technology is actually used rather than what policy says.

    Timestamps

    00:00 – Intro

    01:18 – What the PSPF is and why it exists

    02:49 – Annual updates, directives, and policy advisories

    04:19 – What actually changed in the 2025 PSPF update

    05:36 – AI in the PSPF and why it adds little value

    08:14 – Tool hype vs implementation risk

    10:32 – The AI policy advisory and trusted vendors

    14:25 – Directive 3 and clearance disclosure risks

    17:21 – Personnel security and enforcement reality

    19:41 – Secure Service Edge and SASE recognition

    23:39 – Commonwealth Technology Management directive

    25:28 – Deny lists, transparency, and security through obscurity

    28:05 – Centralised risk sharing and assessment overload

    29:52 – Policy wonk or policy gronk

    31:12 – Final takeaways and closing


Mentioned in this episode:

Call for Feedback



    33 mins