(00:00:00) Hackers Can't Use AI Tools — And What That Means for Your Team
(00:00:30) The Skill Floor Problem
(00:01:13) Guardrails Holding on Mainstream Platforms
(00:02:08) Pentagon AI Vendor Consolidation
(00:02:58) What Developers Should Take From This
A landmark study from the University of Edinburgh analysed over 100 million posts from underground cybercrime forums and reached a finding that cuts against the loudest fears in security: criminals can't get AI coding tools to work for them. That isn't down to ethics guardrails alone; AI is a capability multiplier, not a capability equaliser, and without a skill floor the output is noise attackers can't evaluate or debug. This episode unpacks what that means for developers and engineering leaders thinking about productivity, competency gaps, and how their teams actually benefit from AI co-pilots.
On the guardrails front, Claude, Codex, and similar mainstream platforms are proving more resistant to jailbreak attempts than many predicted. Attackers falling back on WormGPT and other jailbroken alternatives are finding them resource-intensive and noticeably worse than the mainstream tools. Model-level restrictions are functioning, at least for now. AI-assisted crime is gaining ground only in low-skill, high-volume vectors: bots, romance scams, and SEO fraud. Complex attack chains remain largely unaffected.
The structural story: the Pentagon has awarded AI contracts for classified military networks to seven vendors: Google, Microsoft, AWS, Nvidia, OpenAI, Reflection, and SpaceX. Anthropic is absent from the list, after a public dispute over its AI ethics positioning. Vendor positioning on defence contracts is now an active policy decision, not a procurement formality. For developers building enterprise AI systems, understanding where the major platforms sit on government contracts matters more than ever.
This episode includes AI-generated content.