
The AWS Developers Podcast

Written by: Amazon Web Services

About this listen

Stay updated on the latest AWS news and insights for developers, wherever you are, whenever you want.

All rights reserved.
Episodes
  • 95% Faster: How CyberArk Used Iceberg & AI Agents to Crush Support Bottlenecks
    Apr 22 2026
    CyberArk's support team was drowning in logs. With 40+ products across SaaS and self-hosted environments, each generating logs in different formats, support engineers were spending days just preparing data before they could even start investigating a customer issue. Complex cases took up to 15 days to resolve.

    Moshiko Ben Abu, a Software Engineer at CyberArk — now part of Palo Alto Networks — built an AI-powered system that changed all of that. In this episode, he walks us through the full architecture: replacing manual regex parsers with AI-generated grok patterns using Amazon Bedrock and Claude, storing structured data in Apache Iceberg tables via PyIceberg with automatic schema evolution, and querying everything through Athena — all while keeping PII masked and data encrypted in S3.

    But the real breakthrough came with agents. Moshiko describes how he moved from single-product Bedrock agents to a swarm of specialized AI agents built with the Strands framework, where agents investigating product A can autonomously call agents for products B and C to trace root causes across the entire stack. Cases that took 15 days now resolve in hours. Simple cases drop from 4-6 hours to 15-30 minutes. Engineers handle 4x more cases per day.

    We also dig into the security layer — Cedar policies and Amazon Verified Permissions for agent authorization, the identity integration with AgentCore, and what's coming next: S3 Tables, AgentCore in production, and cross-platform agent collaboration with Palo Alto. Moshiko's advice for developers getting started? Learn IAM first, then compute, then databases — and write everything in CDK.
    52 mins
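The episode describes replacing hand-written regex parsers with AI-generated grok patterns. As a rough illustration of what that step amounts to: a grok pattern, once expanded, is essentially a regex with named capture groups. The pattern and log format below are hypothetical examples for one log shape, not CyberArk's actual parsers.

```python
import re

# Hypothetical example of what an AI-generated grok pattern might compile
# down to: a single regex with named capture groups, one per structured field.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>INFO|WARN|ERROR)\s+"
    r"\[(?P<component>[^\]]+)\]\s+"
    r"(?P<message>.*)"
)

def parse_log_line(line: str):
    """Turn one raw log line into a structured record, or None if it doesn't match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

record = parse_log_line("2026-04-01T12:30:05 ERROR [vault-sync] token refresh failed")
```

The resulting dict-shaped records are what would then be appended to an Iceberg table and queried through Athena; per-product log formats would each get their own generated pattern.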
  • Spec-Driven Development and the AI Unified Process — with Simon Martinelli
    Apr 14 2026
    Simon Martinelli is a Java Champion, Vaadin Champion, and Oracle ACE Pro with over three decades of experience building enterprise software. In this episode, he introduces the AI Unified Process (AIUP) — a methodology he created that combines the rigor of the Rational Unified Process with modern AI-assisted development — and makes a compelling case for why specifications, not code, should be the source of truth.

    We explore the difference between system use cases and user stories, and why use cases — with their actors, preconditions, main flows, alternative flows, and business rules — give AI agents far better structure to generate working code. Simon walks through the four phases of AIUP (Inception, Elaboration, Construction, and Transition), showing how specs, code, and tests evolve together iteratively while staying in sync.

    On the architecture side, Simon advocates for Self-Contained Systems over microservices — vertical slices that include UI, backend, and database together, reducing cognitive load for both developers and AI agents. His tech stack of choice is Vaadin for full-stack Java UI, jOOQ for type-safe explicit SQL, and Spring Boot as the application framework — a combination he argues is uniquely well-suited for AI-driven development because it keeps everything in one language with no hidden behavior.

    We also dig into testing strategies with Karibu Testing for browserless Vaadin tests and Playwright for end-to-end coverage, how teams of two working on bounded contexts with trunk-based development are shipping faster than ever, and why the era of AI is bringing back the Renaissance developer — the generalist who understands the full stack from business requirements to production deployment.
    1 hr and 11 mins
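The use-case structure Simon describes (actor, preconditions, main flow, alternative flows, business rules) can be captured as data rather than prose, which is what makes it machine-consumable. The sketch below is a minimal illustration under that assumption; the class, field names, and rendering format are hypothetical and not part of AIUP itself.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """A system use case held as structured data instead of free-form prose."""
    name: str
    actor: str
    preconditions: list
    main_flow: list
    alternative_flows: dict = field(default_factory=dict)
    business_rules: list = field(default_factory=list)

    def to_prompt(self) -> str:
        """Render the spec as structured text an AI coding agent could consume."""
        lines = [f"Use case: {self.name}", f"Actor: {self.actor}"]
        lines += [f"Precondition: {p}" for p in self.preconditions]
        lines += [f"Step {i}: {s}" for i, s in enumerate(self.main_flow, 1)]
        for trigger, steps in self.alternative_flows.items():
            lines.append(f"Alternative ({trigger}): " + " -> ".join(steps))
        lines += [f"Rule: {r}" for r in self.business_rules]
        return "\n".join(lines)

spec = UseCase(
    name="Register customer",
    actor="Back-office clerk",
    preconditions=["Clerk is authenticated"],
    main_flow=["Enter customer data", "Validate VAT number", "Persist customer"],
    alternative_flows={"VAT invalid": ["Show error", "Return to step 1"]},
    business_rules=["VAT number must be unique"],
)
```

Because the spec is data, it can be versioned alongside code and tests, which is the "specs as source of truth" property the episode argues for.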
  • Neurosymbolic AI: Combining GenAI with Mathematical Proof — with Danilo Poccia
    Apr 8 2026
    What if you could combine the creative power of generative AI with the mathematical certainty of formal verification? In this episode, Danilo Poccia — Principal Developer Advocate at AWS — breaks down automated reasoning, a field of AI that has been quietly powering critical AWS services for years and is now becoming essential for production AI systems.

    We explore why generative AI alone is not enough for high-stakes applications, and how automated reasoning provides mathematical proof — not probabilistic guesses — that your AI agents are following the rules. Danilo traces the roots of automated reasoning back to the 'symbolist' branch of AI, explains how AWS has used it internally for years to verify S3 bucket policies, encryption algorithms, and network configurations, and shows how it now converges with neural networks in what researchers call neurosymbolic AI.

    On the practical side, we dig into Amazon Bedrock Guardrails with Automated Reasoning checks — the first and only generative AI safeguard that uses formal logic to verify response accuracy. Danilo walks through how developers can use policy verification for agentic systems and tool access control with Cedar, and how AgentCore Gateway fits into the picture for managing MCP-based tool interactions at scale.

    We also cover the open source landscape: Dafny for verification-aware programming, Lean as a theorem prover, Prolog for logic programming, and the growing ecosystem of MCP servers that bring these capabilities into everyday development workflows. Whether you are building AI agents for production or just curious about what comes after prompt engineering, this conversation will change how you think about AI reliability.
    1 hr and 8 mins
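To make the tool-access-control idea concrete, here is a minimal Python analogue of a policy check over (principal, action, resource) triples. Real Cedar policies are written in Cedar's own policy language and evaluated by a formally verified engine (for example via Amazon Verified Permissions); the entity names and policy shapes below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """One permit/forbid rule; '*' matches anything (a simplification of Cedar)."""
    effect: str      # "permit" or "forbid"
    principal: str   # e.g. "Agent::support-triage"
    action: str      # e.g. "Action::invoke_tool"
    resource: str    # e.g. "Tool::athena_query" or "*"

def is_authorized(policies, principal, action, resource) -> bool:
    """Mirror Cedar's core semantics: an explicit forbid overrides any permit,
    and with no matching permit the default answer is deny."""
    def matches(p):
        return (p.principal in (principal, "*")
                and p.action in (action, "*")
                and p.resource in (resource, "*"))
    matching = [p for p in policies if matches(p)]
    if any(p.effect == "forbid" for p in matching):
        return False
    return any(p.effect == "permit" for p in matching)

# Hypothetical policy set: the triage agent may invoke any tool,
# but no principal may ever invoke the destructive one.
policies = [
    Policy("permit", "Agent::support-triage", "Action::invoke_tool", "*"),
    Policy("forbid", "*", "Action::invoke_tool", "Tool::delete_table"),
]
```

The "forbid wins, default deny" ordering is the property that formal analysis can then prove holds for every possible request, rather than testing a handful of cases.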