• How to build a monolith the right way
    Apr 24 2026


    We sit down with Ian Duncan, senior staff engineer on the stability team at Mercury, to discuss the delicate balance of choosing your tech stack and the implications that follow. That means exploring the concept of the novelty budget, also known as "Choose Boring Technology": the idea that companies should carefully spend their innovation tokens on things that actually move the needle, rather than reinventing the wheel.


    Mercury leverages simple technology like Postgres and EC2 instances alongside high-innovation bets like Haskell and Nix to maintain stability. The conversation unpacks the hidden complexities of over-relying on standard tools, sharing a cautionary tale about using a Postgres table as a massive queuing system until it consumed all the database's resources and caused login failures. To solve architectural scaling without descending into nanoservice madness, we then turn to monolithic build systems. By leveraging hermetically sealed, modular build targets, teams can achieve massive parallelism and avoid endless local rebuilds while maintaining a single coherent view of the codebase.
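    For the curious, the table-as-a-queue pattern at the heart of that cautionary tale can be sketched in a few lines. This is a hypothetical illustration, not Mercury's actual code; sqlite3 stands in for Postgres so the snippet runs anywhere, and a comment notes the `FOR UPDATE SKIP LOCKED` idiom real Postgres workers would use:

```python
import sqlite3

# Hypothetical sketch of a "jobs table as a queue". In real Postgres,
# concurrent workers would claim rows with
#   SELECT ... FOR UPDATE SKIP LOCKED
# so they never block each other; sqlite3 stands in here for runnability.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs ("
    " id INTEGER PRIMARY KEY,"
    " payload TEXT,"
    " status TEXT NOT NULL DEFAULT 'pending')"
)
conn.executemany("INSERT INTO jobs (payload) VALUES (?)",
                 [("send-email",), ("charge-card",)])

def claim_one(conn):
    """Claim the oldest pending job, or return None if the queue is empty."""
    with conn:  # one transaction: select a pending row, then mark it running
        row = conn.execute(
            "SELECT id, payload FROM jobs"
            " WHERE status = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?",
                     (row[0],))
        return row

job = claim_one(conn)  # (1, 'send-email')
```

    The episode's warning still applies: without careful indexing and vacuuming, a hot table like this can starve the rest of the database, which is exactly how the login failures happened.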


    We also advocate for separating management tools from primary systems by utilizing dedicated control planes, and touch on the rising popularity of durable execution frameworks like Temporal to handle resilient workflows. And it turns out Ian might be a bigger advocate of microservices than he thought!


    💡 Notable Links:
    • Ian's blog
    • Book: Blah Blah Blah
    • Using Innovation Tokens
    • Novelty budget
    • Buck2
    🎯 Picks:
    • Warren - Why Archers Didn’t Volley Fire
    • Ian - Band - Gloryhammer
    45 mins
  • Infrastructure as code: why you can never avoid thinking
    Apr 17 2026


    We explore the past and AI-driven future of Infrastructure as Code with Cloud Posse's Erik Osterman, discussing various IaC traumas. Erik maintains the world's largest repository of open-source IaC modules. Looking back at the dark ages of infrastructure, from the early days of raw CloudFormation and Capistrano to the rise and fall of tools like Puppet and Chef, we discuss the organic, messy growth of cloud environments, where organizations frequently scale a single AWS account into a tangled web rather than adopting a robust multi-account architecture guided by a proper framework.


    The conversation then shifts to the modern era of rapid, AI-driven infrastructure development. While generating IaC with large language models can be incredibly fast, it introduces severe risks if left unchecked, and we explore how organizations can protect themselves by relying on Architectural Decision Records (ADRs) and predefined "skills". The hopeful goal: autonomous deployments that are compliant, reproducible, and secure, rather than built on hallucinated architecture.


    Finally, we tackle the compounding issue of code review in an age where developers can produce a year's worth of sloppy engineering "progress" in a single week.


    💡 Notable Links:
    • Atmos framework
    • Checkov - IaC Validation
    • Code Rabbit
    • ✨ Episode: Agent Skills
    • ✨ Episode: All about MCPs
    🎯 Picks:
    • Warren - Project Hail Mary
    • Erik - Everybody's free to wear sunscreen & Book: The 10X Rule
    53 mins
  • GPU versus CPU: What is engineering really doing for us
    Apr 9 2026


    We sit down with Jaikumar Ganesh, Head of Engineering at AnyScale, to explore the intricacies of heterogeneous compute. He unpacks the growing CPU/GPU divide, detailing how ML pipelines require precise orchestration — using CPUs for data reading and writing while leveraging expensive, massive-die GPUs for chunking and embedding.


    Warren brings the insight that, with AI agents rapidly changing how software is created, building software is now within reach of business-focused teams. Our guest shares how sales and marketing departments are increasingly using tools like Cursor and Claude to develop their own workflow automations. We discuss the question this shift begs: what is engineering really doing for us?


    JK emphasizes that the core responsibility of the engineering organization is reliability. While anyone can generate code, running stable production software requires the deep "battle scars", robust observability, and meticulous release processes that only a dedicated engineering team can provide.


    Finding the talent to maintain this critical infrastructure isn't easy, which is why JK advocates for highly creative hiring strategies. He shares incredible success stories of bypassing traditional recruiting: running hiring ads in foreign-language movies at local theaters and setting up booths at social food festivals to find uniquely qualified candidates.


    🎯 Picks:
    • Warren - Archers Don't Fire Volleys
    • JK - Book: The Explorer's Gene
    41 mins
  • Upskilling your agents
    Mar 28 2026


    In this adventure, we sit down with Dan Wahlin, Principal of DevRel for JavaScript, AI, and Cloud at Microsoft, to explore the complexities of modern infrastructure. We examine how cloud platforms like Azure function as "building blocks", which can quickly become overwhelming without the right instruction manuals. To bridge this gap, one potential solution we discuss is the emerging reliance on AI "skills": specialized markdown files that give coding agents the exact knowledge needed to deploy poorly documented, complex open-source projects to container apps without requiring deep infrastructure expertise.
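    As a concrete, entirely hypothetical illustration (the file name and fields are invented for this sketch, not taken from any specific skill marketplace), such a skill is usually just a markdown file with front matter describing when to use it, followed by step-by-step instructions the agent loads on demand:

```markdown
---
name: deploy-to-container-apps
description: Deploy a containerized open-source project to Azure Container Apps.
---

# Deploying to Container Apps

1. Build the image from the project's Dockerfile and push it to a registry.
2. Create (or reuse) a Container Apps environment in the target resource group.
3. Deploy the image, mapping the port the project's docs say it listens on.
4. Report the generated ingress URL back to the user and verify it responds.
```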


    And we say the quiet part out loud as we review how handing the keys over to autonomous agents introduces terrifying new attack vectors: the security nightmare of prompt injections and the careless execution of unvetted AI skills. It's a blast from the past, and we compare today's downloading of random agent instructions to running untrusted executables from early internet sites. While tools like OpenClaw purport to offer incredible automation, such as allowing agents to scour the internet and execute code without human oversight, this has already led to disastrous leaks of API keys. We emphasize the critical necessity of validating skills through trusted repositories, where even having agents perform security reviews on the code before execution is not enough.


    Finally, we tackle the philosophical debate around AI productivity and why Dorota's observation that LLMs raise the floor, not the ceiling, is so spot on. The standout pick deserves a mention: a fascinating 1983 paper titled "Ironies of Automation" by Lisanne Bainbridge. It perfectly predicts our current dilemma: automating systems often leaves the most complex, difficult tasks to human operators, proving that as automation scales, the need for rigorous human monitoring actually increases, destroying the very value the original innovation was attempting to capture.


    💡 Notable Links:
    • Agent Skill Marketplace
    • AI Fatigue is real
    • Episode: Does Productivity even exist?
    🎯 Picks:
    • Warren - Paper: Ironies of Automation (& AI)
    • Dan - Tool: SkillShare
    53 mins
  • There's no way it's DNS...
    Mar 20 2026


    How much do you really know about the protocol that everything is built upon? This week, we go behind the scenes with Simone Carletti, a 13-year industry veteran and CTO at DNSimple, to explore the hidden complexities of DNS. We attempt to uncover why exactly DNS is often the last place developers check during an outage, drawing fascinating parallels between modern web framework abstractions and network-level opaqueness.


    Simone shares why his team relies on bare-metal machines instead of cloud providers to run their Erlang-based authoritative name servers, highlighting the critical need to control BGP routing. We trade incredible war stories, from Facebook locking themselves out of their own data centers due to a BGP error, to a massive 2014 DDoS attack that left DNSimple unable to access their own log aggregation service. The conversation also tackles the reality of implementing new standards like SVCB and HTTPS records, and why widespread DNSSEC adoption might require an industry-wide mandate.


    And of course we have the picks, but I'm not spoiling this week's just yet...


    💡 Notable Links:
    • Episode: IPv6
    • SVCB + HTTPS DNS Resource Records RFC 9460
    • Avian Carrier RFC 1149
    🎯 Picks:
    • Warren - Book: One Second After
    • Simone - Recommended diving locations in Italy and Wreck diving projects
    52 mins
  • Getting better at networking
    Mar 15 2026


    We are joined by Daan Boerlage, CTO at Mavexa, as we tackle the long-awaited arrival of IPv6 in cloud infrastructure. We highlight how migrating to an IPv6-native setup eliminates public/private subnet complexity and expensive NAT gateways, as well as entirely sidestepping the nightmare of IP collisions during VPC peering.


    Beyond the financial savings of ditching IPv4 charges, we explore the technical superiority of IPv6. Daan breaks down just how mind-bogglingly large the address space is, and focuses on how it solves serverless IP exhaustion while systematically debunking the pervasive myth that NAT is a security feature. We also discuss how IPv6's end-to-end connectivity paves the way for next-generation protocols like QUIC, HTTP/3, and WebTransport.
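    To put "mind-bogglingly large" in numbers, here is a quick back-of-the-envelope check in Python (the example prefix is the documentation range 2001:db8::/64, chosen just for illustration):

```python
import ipaddress

ipv4_total = 2 ** 32   # the entire IPv4 internet: ~4.3 billion addresses
ipv6_total = 2 ** 128  # the IPv6 space: ~3.4 * 10**38 addresses

# A single standard /64 subnet already holds 2**64 addresses:
subnet = ipaddress.ip_network("2001:db8::/64")
per_subnet = subnet.num_addresses

# So every /64 contains ~4.3 billion copies of the whole IPv4 internet.
ipv4_internets_per_subnet = per_subnet // ipv4_total
```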


    The episode rounds out with a cathartic venting session about legacy architecture, detailing a grueling nine-year migration away from a central shared database that ironically culminated in a move to Salesforce. Almost by design, Daan recommends his pick, praising its intuitive use of signals and fine-grained reactivity over React. And Warren's pick explores storing data in the internet itself by leveraging the dwell time of ICMP ping packets.


    💡 Notable Links:
    • FOSDEM talk on the internet of threads
    • Hilbert Map of IPv6 address space
    🎯 Picks:
    • Warren - Harder Drive: what we didn't want or need
    • Daan - SolidJS
    49 mins
  • Varied Designer Does Vibecoding: Why testing always wins
    Mar 6 2026


    In this episode, we examine how the software industry is fundamentally changing. We're joined by our expert guest, Matt Edmunds, a long-time UX director, principal designer, and Principal UX Consultant at Tiny Pixls. The episode kicks off by analyzing how early AI implementation in Applicant Tracking Systems (ATS) created rigid hiring processes that actively filter out the varied candidates who actually bring necessary diversity to engineering teams.


    Of course we get to the world of "vibe coding", and revisit the poor LLM usage highlighted in the DORA 2025 report, exploring how professionals without traditional software engineering backgrounds are leveraging models to generate functional code.


    Matt details his hands-on experience using the latest Claude Opus and Gemini Pro models, successfully building a low-level C virtual audio driver in 30 minutes, driven by personal needs. We discuss the inherent challenges of large context windows, and coin the term "guess-driven development". To combat these hallucinations, Matt shares his strategy of using question-based prompting and anchoring the AI with comprehensive test files and documented schemas, which the models treat as an undeniable source of truth.


    Beyond the code, we look at the broader economic and physical limitations of the current AI boom, noting that AI providers are operating at massive financial losses while awaiting hardware efficiency improvements.


    💡 Notable Links:
    • Oatmeal on hating AI Art
    • Episode: DORA 2025 Report
    🎯 Picks:
    • Warren - Book: Start With Why
    • Matt - Book: Creativity, Inc.
    58 mins
  • DevOps trifecta: documentation, reliability, and feature flags
    Feb 20 2026


    We dive into the shifting landscape of developer relations and the new necessity of optimizing documentation for both humans and LLMs. Melinda Fekete joins us from Unleash, and suggests transitioning to a docs platform that helps get this right, utilizing llms.txt files to cleanly expose content to AI models.


    The conversation then takes a look at the June GCP outage, which was triggered by a single IAM policy change. It illustrates that even with world-class CI/CD pipelines and runtime controls such as feature flags, deployments remain risky. If feature flags can't save GCP and the other cloud providers, what hope do the rest of us have?


    Finally, we discuss the practical implementation of these systems, advocating for "boring technology" like polling over streaming to ensure reliability, and conducting internal "breakathons" to test features before a full rollout.
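    The "polling over streaming" argument boils down to failure behavior: when a poll fails, you simply keep serving the last known flag state. A minimal, hypothetical sketch of that idea (illustrative names only, not the Unleash SDK):

```python
import time

class PollingFlagClient:
    """Minimal polling-based feature-flag client (hypothetical sketch).

    fetch is any callable returning {flag_name: bool}, e.g. an HTTP GET
    against a flag service. A failed poll keeps the last known-good state,
    which is exactly the "boring" reliability property discussed above.
    """

    def __init__(self, fetch, interval_s=15.0):
        self.fetch = fetch
        self.interval_s = interval_s
        self._flags = {}
        self._last_poll = float("-inf")  # force a poll on first use

    def is_enabled(self, name, default=False):
        now = time.monotonic()
        if now - self._last_poll >= self.interval_s:
            self._last_poll = now
            try:
                self._flags = self.fetch()
            except Exception:
                pass  # poll failed: keep serving the cached flags
        return self._flags.get(name, default)

# First poll succeeds, second poll "fails"; the cached value survives.
responses = iter([{"new-login": True}])
client = PollingFlagClient(lambda: next(responses), interval_s=0.0)
first = client.is_enabled("new-login")   # True, from the successful poll
second = client.is_enabled("new-login")  # still True despite the failed poll
```

    A streaming client, by contrast, has to reason about reconnects, missed events, and resync logic just to reach the same steady state.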


    💡 Notable Links:
    • Diátaxis - Who is this article for?
    • Fern - Docs Platform
    • CloudFlare - Feature Flag causes outage
    • AWS - Graceful degradation
    • Building for 5 nines reliability
    • Episode: Latency is always more important than freshness
    • Episode: DORA 2025 Report
    🎯 Picks:
    • Warren - Show: Bosch - LA Detective procedural
    • Melinda - Wavelength - Party Game
    32 mins