• curl Shuts Down Bug Bounties Due to AI Slop, AWS RDS Blue/Green Cuts Switchover Downtime to ~5 Seconds, and Amazon ECR Adds Cross-Repository Layer Sharing
    Jan 24 2026

    This week on Ship It Weekly, Brian looks at three different versions of the same problem: systems are getting faster, but human attention is still the bottleneck.

    We start with curl shutting down their bug bounty program after getting flooded with low-quality “AI slop” reports. It’s not a “security vs maintainers” story; it’s an incentives and signal-to-noise story. When the cost to generate reports goes to zero, you basically DoS the people doing triage.

    Next, AWS improved RDS Blue/Green Deployments to cut writer switchover downtime to typically ~5 seconds or less (single-region). That’s a big deal, but “fast switchover” doesn’t automatically mean “safe upgrade.” Your connection pooling, retries, and app behavior still decide whether it’s a blip or a cascade.
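
    For a sense of what “your app behavior decides” means in practice, here’s a minimal sketch (assuming psycopg2 and a hypothetical get_dsn() helper; only safe for idempotent writes) of a write wrapper that reconnects and retries through a brief switchover window instead of surfacing every dropped connection as a failure:

        import time
        import psycopg2

        def execute_with_retry(sql, params, get_dsn, attempts=5, base_delay=0.5):
            """Retry a write through a short switchover window (idempotent statements only)."""
            for attempt in range(1, attempts + 1):
                conn = None
                try:
                    # Reconnect on every attempt: after the switchover the old writer
                    # endpoint points at the new writer, so a fresh connection lands right.
                    conn = psycopg2.connect(get_dsn(), connect_timeout=3)
                    with conn.cursor() as cur:
                        cur.execute(sql, params)
                    conn.commit()
                    return
                except psycopg2.OperationalError:
                    if attempt == attempts:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1))  # ride out the ~5 second blip
                finally:
                    if conn is not None:
                        conn.close()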

    Third, Amazon ECR added cross-repository layer sharing. Sounds small, but if you’ve got a lot of repos and you’re constantly rebuilding/pushing the same base layers, this can reduce storage duplication and speed up pushes in real fleets.

    Lightning round covers a practical Kubernetes clientcmd write-up, a solid “robust Helm charts” post, a traceroute-on-steroids style tool, and Docker Kanvas as another signal that vendors are trying to make “local-to-cloud” workflows feel less painful.

    We wrap with Honeycomb’s interim report on their extended EU outage, and the part that always hits hardest in long incidents: managing engineer energy and coordination over multiple days is a first-class reliability concern.

    Links from this episode

    curl bug bounties shutdown https://github.com/curl/curl/pull/20312

    RDS Blue/Green faster switchover https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-rds-blue-green-deployments-reduces-downtime/

    ECR cross-repo layer sharing https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-ecr-cross-repository-layer-sharing/

    Kubernetes clientcmd apiserver access https://kubernetes.io/blog/2026/01/19/clientcmd-apiserver-access/

    Building robust Helm charts https://www.willmunn.xyz/devops/helm/kubernetes/2026/01/17/building-robust-helm-charts.html

    ttl tool https://github.com/lance0/ttl

    Docker Kanvas (InfoQ) https://www.infoq.com/news/2026/01/docker-kanvas-cloud-deployment/

    Honeycomb EU interim report https://status.honeycomb.io/incidents/pjzh0mtqw3vt

    SRE Weekly issue #504 https://sreweekly.com/sre-weekly-issue-504/

    16 mins
  • n8n Auth RCE (CVE-2026-21877), GitHub Artifact Permissions, and AWS DevOps Agent Lessons
    Jan 16 2026

    This week on Ship It Weekly, the theme is simple: the automation layer has become a control plane, and that changes how you should think about risk.

    We start with n8n’s latest critical vulnerability, CVE-2026-21877. This one is different from the unauth “Ni8mare” issue we covered in Episode 12. It’s authenticated RCE, which means the real question isn’t only “is it internet exposed,” it’s who can log in, who can create or modify workflows, and what those workflows can reach. Takeaway: treat workflow automation tools like CI systems. They run code, they hold credentials, and they can pivot into real infrastructure.

    Next is GitHub’s new fine-grained permission for artifact metadata. Small change, big least-privilege implications for Actions workflows. It’s also a good forcing function to clean up permission sprawl across repos.

    Third is AWS’s DevOps Agent story, and the best part is that it’s not hype. It’s a real look at what it takes to operationalize agents: evaluation, observability into tool calls/decisions, and control loops with brakes and approvals. Prototype is cheap. Reliability is the work.
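
    As a rough illustration of the “brakes and approvals” idea (not AWS’s implementation; the tool registry and require_approval flag are invented for the sketch), here’s a thin wrapper that makes every tool call an auditable event and refuses to run mutating actions without a named approver:

        import json
        import logging
        import time

        log = logging.getLogger("agent.tools")

        # Hypothetical registry: which tools the agent may call, and which need a human.
        TOOLS = {
            "describe_instances": {"fn": lambda **kw: "...", "require_approval": False},
            "restart_service":    {"fn": lambda **kw: "...", "require_approval": True},
        }

        def call_tool(name, approved_by=None, **kwargs):
            spec = TOOLS.get(name)
            if spec is None:
                raise ValueError(f"unknown tool: {name}")  # deny by default
            if spec["require_approval"] and not approved_by:
                raise PermissionError(f"{name} needs a human approval before it runs")
            started = time.time()
            result = spec["fn"](**kwargs)
            # Every call leaves a structured trail: tool, args, approver, duration.
            log.info(json.dumps({
                "tool": name, "args": kwargs, "approved_by": approved_by,
                "duration_ms": round((time.time() - started) * 1000),
            }))
            return result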

    Lightning round: GitHub secret scanning changes that can quietly impact governance, a punchy Claude Code “guardrails aren’t guaranteed” reminder, Block’s Goose as another example of agent workflows getting productized, and OpenCode as an “agent runner” pattern worth watching if you’re experimenting locally.

    Links

    n8n CVE-2026-21877 (authenticated RCE) https://thehackernews.com/2026/01/n8n-warns-of-cvss-100-rce-vulnerability.html?m=1

    Episode 12 (n8n “Ni8mare” / CVE-2026-21858) https://www.tellerstech.com/ship-it-weekly/n8n-critical-cve-cve-2026-21858-aws-gpu-capacity-blocks-price-hike-netflix-temporal/

    GitHub: fine-grained permission for artifact metadata (GA) https://github.blog/changelog/2026-01-13-new-fine-grained-permission-for-artifact-metadata-is-now-generally-available/

    GitHub secret scanning: extended metadata auto-enabled (Feb 18) https://github.blog/changelog/2026-01-15-secret-scanning-extended-metadata-to-be-automatically-enabled-for-certain-repositories/

    Claude Code issue thread (Bedrock guardrails gap) https://github.com/anthropics/claude-code/issues/17118

    Block Goose (tutorial + sessions/context) https://block.github.io/goose/docs/tutorials/rpi https://block.github.io/goose/docs/guides/sessions/smart-context-management

    OpenCode https://opencode.ai

    More episodes + details: https://shipitweekly.fm

    12 mins
  • Ship It Conversations: Human-in-the-Loop Fixer Bots and AI Guardrails in CI/CD (with Gracious James)
    Jan 12 2026

    This is a guest conversation episode of Ship It Weekly (separate from the weekly news recaps).

    In this Ship It: Conversations episode I talk with Gracious James Eluvathingal about TARS, his “human-in-the-loop” fixer bot wired into CI/CD.

    We get into why he built it in the first place, how he stitches together n8n, GitHub, SSH, and guardrailed commands, and what it actually looks like when an AI agent helps with incident response without being allowed to nuke prod. We also dig into rollback phases, where humans stay in the loop, and why validating every LLM output before acting on it is the single most important guardrail.
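
    To make “validate before acting” concrete, here’s a minimal sketch (not TARS’s actual code; the allowlist patterns are invented for illustration) of checking an LLM-proposed command against an allowlist before anything executes:

        import re
        import shlex
        import subprocess

        # Only these binaries, and only these exact command shapes, are allowed to run.
        ALLOWED = {
            "systemctl": re.compile(r"systemctl (status|restart) [a-z0-9_.-]+"),
            "kubectl":   re.compile(r"kubectl rollout restart deployment/[a-z0-9-]+ -n [a-z0-9-]+"),
        }

        def run_if_allowed(proposed: str) -> str:
            """Execute an LLM-proposed command only if it matches an allowlisted pattern."""
            argv = shlex.split(proposed)
            if not argv:
                raise ValueError("empty command")
            pattern = ALLOWED.get(argv[0])
            if pattern is None or not pattern.fullmatch(proposed.strip()):
                raise PermissionError(f"rejected: {proposed!r} is not on the allowlist")
            # No shell=True: the validated argv runs directly, nothing gets interpolated.
            return subprocess.run(argv, capture_output=True, text=True, check=True).stdout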

    If you’re curious about AI agents in pipelines but hate the idea of a fully autonomous “ops bot,” this one is very much about the middle ground: segmenting workflows, limiting blast radius, and using agents to reduce toil instead of replace engineers.

    Gracious also walks through where he’d like to take TARS next (Terraform, infra-level decisions, more tools) and gives some solid advice for teams who want to experiment with agents in CI/CD without starting with “let’s give it root and see what happens.”

    Links from the episode:

    Gracious on LinkedIn: https://www.linkedin.com/in/gracious-james-eluvathingal

    TARS overview post: https://www.linkedin.com/posts/gracious-james-eluvathingal_aiagents-devops-automation-activity-7391064503892987904-psQ4

    If you found this useful, share it with the person on your team who’s poking at AI automation and worrying about guardrails.

    More information on our website: https://shipitweekly.fm

    22 mins
  • n8n Critical CVE (CVE-2026-21858), AWS GPU Capacity Blocks Price Hike, Netflix Temporal
    Jan 9 2026

    This week on Ship It Weekly, Brian’s theme is basically: the “automation layer” is not a side tool anymore. It’s part of your perimeter, part of your reliability story, and sometimes part of your budget problem too.

    We start with the n8n security issue. A lot of teams use n8n as glue for ops workflows, which means it tends to collect credentials and touch real systems. When something like this drops, the right move is to treat it like production-adjacent infra: patch fast, restrict exposure, and assume anything stored in the tool is high value.

    Next is AWS quietly raising prices on EC2 Capacity Blocks for ML. Even if you’re not a GPU-heavy shop, it’s a useful signal: scarce compute behaves like a market. If you do rely on scheduled GPU capacity, it’s time to revisit forecasts and make sure your FinOps tripwires catch rate changes before the end-of-month surprise.
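
    A “tripwire” here can be as simple as a scheduled job that compares yesterday’s spend per service against a trailing baseline. A rough sketch using boto3’s Cost Explorer API (the threshold and alert() hook are placeholders you’d wire to your own alerting):

        import datetime as dt
        import boto3

        def daily_cost_tripwire(threshold_pct=25, alert=print):
            """Flag services whose latest daily spend jumped versus the prior week's average."""
            ce = boto3.client("ce")
            end = dt.date.today()
            start = end - dt.timedelta(days=8)
            resp = ce.get_cost_and_usage(
                TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
                Granularity="DAILY",
                Metrics=["UnblendedCost"],
                GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
            )
            per_service = {}
            for day in resp["ResultsByTime"]:
                for group in day["Groups"]:
                    svc = group["Keys"][0]
                    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
                    per_service.setdefault(svc, []).append(amount)
            for svc, series in per_service.items():
                if len(series) < 2:
                    continue
                baseline = sum(series[:-1]) / len(series[:-1])
                latest = series[-1]
                if baseline > 0 and (latest - baseline) / baseline * 100 > threshold_pct:
                    alert(f"{svc}: ${latest:.2f} yesterday vs ~${baseline:.2f}/day baseline")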

    Third is Netflix’s write-up on using Temporal for reliable cloud operations. The best takeaway is not “go adopt Temporal tomorrow.” It’s the pattern: long-running operational workflows should be resumable, observable, and safe to retry. If your critical ops are still bash scripts and brittle pipelines, you’re one transient failure away from a very dumb day.
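
    You don’t need Temporal on day one to start stealing that pattern; the core of it is checkpointed, idempotent steps so a rerun picks up exactly where it stopped. A toy sketch (the file-based state store and step names are stand-ins, not Netflix’s or Temporal’s API):

        import json
        from pathlib import Path

        STATE = Path("/tmp/migrate-cluster-42.json")  # stand-in for a durable state store

        def load_state():
            return json.loads(STATE.read_text()) if STATE.exists() else {"done": []}

        def checkpoint(state):
            STATE.write_text(json.dumps(state))

        def run_workflow(steps):
            """Run named steps in order, skipping any that already completed.

            Each step must be idempotent (or guarded), so rerunning after a crash,
            a deploy, or a transient failure is always safe.
            """
            state = load_state()
            for name, fn in steps:
                if name in state["done"]:
                    continue              # resume: this step already happened
                fn()                      # may raise; the next run resumes right here
                state["done"].append(name)
                checkpoint(state)         # durable progress after every step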

    In the lightning round: Kubernetes Dashboard getting archived (and the “ops dependencies die” reality check), Docker pushing hardened images as a safer baseline, and Pipedash.

    Links

    SRE Weekly issue 504 (source roundup) https://sreweekly.com/sre-weekly-issue-504/

    n8n CVE (NVD) https://nvd.nist.gov/vuln/detail/CVE-2026-21858

    n8n community advisory https://community.n8n.io/t/security-advisory-security-vulnerability-in-n8n-versions-1-65-1-120-4/247305

    AWS price increase coverage (The Register) https://www.theregister.com/2026/01/05/aws_price_increase/

    Netflix: Temporal powering reliable cloud operations https://netflixtechblog.com/how-temporal-powers-reliable-cloud-operations-at-netflix-73c69ccb5953

    Kubernetes SIG-UI thread (Dashboard archiving) https://groups.google.com/g/kubernetes-sig-ui/c/vpYIRDMysek/m/wd2iedUKDwAJ

    Kubernetes Dashboard repo (archived) https://github.com/kubernetes/dashboard

    Pipedash https://github.com/hcavarsan/pipedash

    Docker Hardened Images https://www.docker.com/blog/docker-hardened-images-for-every-developer/

    More episodes and more details on this episode can be found on our website: https://shipitweekly.fm

    16 mins
  • Ship It Conversations: Backstage vs Internal IDPs, and Why DevEx Muscle Matters (with Danny Teller)
    Jan 6 2026

    This is a guest conversation episode of Ship It Weekly (separate from the weekly news recaps).

    I sat down with Danny Teller, a DevOps Architect and Tech Lead Manager at Tipalti, to talk about internal developer platforms and the reality behind “just set up a developer portal.” We get into Backstage versus internal IDPs, why adoption is the real battle, and why platform/DevEx maturity matters more than whatever tool you pick.

    What we covered

    Backstage vs internal IDPs
    Backstage is a solid starting point for a developer portal, but it doesn’t magically create standards, ownership, or platform maturity. We talk about when Backstage fits, and when teams end up building internal tooling anyway.

    DevEx muscle (the make-or-break)
    Danny’s take: the portal UI is the easy part. The hard part is the ongoing work that makes it useful: paved roads, sane defaults, support, and keeping the catalog/data accurate so engineers trust it.

    Where teams get burned
    Common failure mode: teams ship a portal first, then realize they don’t have the resourcing, ownership, or workflows behind it. Adoption fades fast if the portal doesn’t remove real friction.

    A build vs buy gut check
    We walk through practical signals that push you toward open source Backstage, a managed Backstage offering, or a commercial portal. We also hit the maintenance trap: if you build too much, you’ve created a second product.

    Links and resources

    Danny Teller's LinkedIn: https://www.linkedin.com/in/danny-teller/

    matlas — one CLI for Atlas and MongoDB: https://github.com/teabranch/matlas-cli

    Backstage: https://backstage.io/

    Roadie (managed Backstage): https://roadie.io/

    Port: https://www.port.io/

    Cortex: https://www.cortex.io/

    OpsLevel: https://www.opslevel.com/

    Atlassian Compass: https://www.atlassian.com/software/compass

    Humanitec Platform Orchestrator: https://humanitec.com/products/platform-orchestrator

    Northflank: https://northflank.com/

    If you enjoyed this episode Ship It Weekly is still the weekly news recap, and I’m dropping these guest convos in between. Follow/subscribe so you catch both, and if this was useful, share it with a platform/devex friend and leave a quick rating or review. It helps more than it should.

    Visit our website at https://www.shipitweekly.fm

    26 mins
  • Fail Small, IaC Control Planes, and Automated RCA
    Jan 3 2026

    This week on Ship It Weekly, Brian kicks off the new year with one theme: automation is getting faster, and that makes blast radius and oversight matter more than ever.

    We start with Cloudflare’s “fail small” mindset. The core idea is simple: big outages usually come from correlated failure, not one box dying. If a bad change lands everywhere at once, you’re toast. “Fail small” is about forcing problems to stay local so you can stop the bleeding before it becomes global.
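
    One concrete way to force failures to stay local is to push every change through progressively larger slices and stop at the first sign of trouble. A minimal sketch (the deploy/health-check callables and wave sizes are illustrative, not Cloudflare’s system):

        STAGES = [0.01, 0.05, 0.25, 1.0]  # fraction of the fleet per wave

        def staged_rollout(hosts, deploy, healthy, error_budget=0.02):
            """Roll a change out in waves; abort and leave the rest untouched on failure."""
            done = 0
            for fraction in STAGES:
                target = int(len(hosts) * fraction)
                wave = hosts[done:target]
                failures = 0
                for host in wave:
                    deploy(host)
                    if not healthy(host):
                        failures += 1
                done = target
                if wave and failures / len(wave) > error_budget:
                    # Blast radius stops at this wave; later waves never see the change.
                    raise RuntimeError(f"aborting rollout: {failures}/{len(wave)} unhealthy")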

    Next is Pulumi’s push to be the control plane for all your IaC, including Terraform and HCL. The interesting part isn’t syntax wars. It’s the workflow layer: approvals, policy enforcement, audit trails, drift, and how teams standardize without signing up for a multi-year rewrite.

    Third is Meta’s DrP, a root cause analysis platform that turns repeated incident investigation steps into software. Even if you’re not Meta, the pattern is worth stealing: automate the first 10–15 minutes of your most common incident types so on-call is consistent no matter who’s holding the pager.
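
    “Automate the first 10–15 minutes” can start as boring as a script that gathers the same evidence every responder would collect by hand. A sketch (the commands, namespaces, and output path are examples; swap in whatever your stack’s first questions are):

        import datetime as dt
        import subprocess

        CHECKS = [
            ("recent deploys", ["kubectl", "rollout", "history", "deployment/api", "-n", "prod"]),
            ("pod restarts",   ["kubectl", "get", "pods", "-n", "prod",
                                "--sort-by=.status.containerStatuses[0].restartCount"]),
            ("firing alarms",  ["aws", "cloudwatch", "describe-alarms", "--state-value", "ALARM"]),
        ]

        def first_fifteen_minutes(out_path="triage.txt"):
            """Collect the evidence everyone gathers anyway, in the same order, every time."""
            lines = [f"Triage snapshot {dt.datetime.utcnow().isoformat()}Z"]
            for title, cmd in CHECKS:
                result = subprocess.run(cmd, capture_output=True, text=True)
                lines += [f"\n== {title} ==", result.stdout or result.stderr]
            with open(out_path, "w") as fh:
                fh.write("\n".join(lines))
            return out_path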

    In the lightning round: a follow-up on GitHub Actions direction (and a quick callback to Episode 6’s runner pricing pause), AWS ECR creating repos on push, a smarter take on incident metrics, Terraform drift visibility, and parallel “coding agent” workflows.

    We wrap with a human reminder about the ironies of automation: automation doesn’t remove responsibility, it moves it. Faster systems require better brakes, better observability, and easier rollback.

    Links from this episode

    SRE Weekly issue 503 (source roundup - Cloudflare) https://sreweekly.com/sre-weekly-issue-503/

    Pulumi: all IaC, including Terraform and HCL https://www.pulumi.com/blog/all-iac-including-terraform-and-hcl/

    Meta DrP: https://engineering.fb.com/2025/12/19/data-infrastructure/drp-metas-root-cause-analysis-platform-at-scale/

    GitHub Actions: “Let’s talk about GitHub Actions” https://github.blog/news-insights/product-news/lets-talk-about-github-actions/

    Episode 6 (GitHub runner pricing pause, Terraform Cloud limits, AI in CI) https://www.tellerstech.com/ship-it-weekly/github-runner-pricing-pause-terraform-cloud-limits-and-ai-in-ci/

    AWS ECR: create repositories on push https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-ecr-creating-repositories-on-push/

    DriftHound https://drifthound.io/

    Superset https://superset.sh/

    More episodes + contact info, and more details on this episode can be found on our website: https://shipitweekly.fm

    18 mins
  • Ship It Conversations: From Full-Stack to Cloud/DevOps, One Project at a Time (with Eric Paatey)
    Dec 30 2025

    This is a guest conversation episode of Ship It Weekly (separate from the weekly news recaps).

    I sat down with Eric Paatey, a Cloud & DevOps Engineer who’s been transitioning from full-stack web development into cloud/devops, and building real skills through hands-on projects instead of just collecting tools and buzzwords.

    We talk about what that transition actually feels like, what’s helped most, and why you don’t need a rack of servers to learn DevOps.

    What we covered

    Eric’s path into DevOps
    How he moved from building web apps to caring about pipelines, infra, scalability, reliability, and automation. The “oh… code is only part of the job” moment that pushes a lot of people toward DevOps.

    The WHY behind DevOps
    Eric’s take: DevOps is mainly about breaking down silos and improving communication between dev, ops, security, and the business. We also hit the idea from The DevOps Handbook: small batches win. The bigger the release, the harder it is to recover when something breaks.

    Leveling up without drowning in tools
    DevOps has an endless tool list, so we talked about how to stay current without burning out. Eric’s recommendation: stay connected to the industry. Meet people, join user groups, go to events, and don’t silo yourself.

    The homelab mindset (and why simple is fine)
    Eric shared his “homelab on the go” setup and why the hardware isn’t the point. It’s about using a safe environment to build habits: automation, debugging, systems thinking, monitoring, breaking things, recovering, and improving the design.

    A practical first project for aspiring DevOps engineers
    We talked through a starter project you can actually show in interviews: Dockerize a simple app, deploy it behind an ALB, and learn basic networking/security along the way. You don’t need to understand everything on day one, but you do need to build things and learn what breaks.

    Agentic AI and guardrails
    We also touched on AI agents and MCPs, what they could mean for ops teams, and why you should not give agents full access to anything. Least privilege and policy guardrails matter, because “non-deterministic” and “prod permissions” is a scary combo.

    Links and resources

    Eric Paatey on LinkedIn: https://www.linkedin.com/in/eric-paatey-72a87799/

    Eric’s website/portfolio: https://ericpaatey.com/

    If you enjoyed this episode Ship It Weekly is still the weekly news recap, and I’m dropping these guest convos in between. Follow/subscribe so you catch both, and if this was useful, share it with a coworker or your on-call buddy and leave a quick rating or review. It helps more than it should.

    Visit our website at https://www.shipitweekly.fm

    23 mins
  • Cloudflare’s Workers Scheduler, AWS DBs on Vercel, and JIT Admin Access
    Dec 27 2025

    This week on Ship It Weekly, Brian looks at real platform engineering in the wild.

    We start with Cloudflare’s write-up on building an internal maintenance scheduler on Workers. It’s not marketing fluff. It’s “we hit memory limits, changed the model, and stopped pulling giant datasets into the runtime.”

    Next up: AWS databases are now available inside the Vercel Marketplace. This is a quiet shift with loud consequences. Devs can spin up real AWS databases with a click from the same place they deploy apps, and platform teams still own the guardrails: account sprawl, billing/tagging, audit trails, region choices, and networking posture.

    Third story: TEAM (Temporary Elevated Access Management) for IAM Identity Center. Time-bound elevation with approvals, automatic expiry, and auditing. We cover how this fits alongside break-glass and why auto-expiry is the difference between least-privilege and privilege creep.
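
    The auto-expiry piece is the part worth copying even if you never touch Identity Center: access that lapses on its own needs no cleanup ticket and no one remembering to revoke it. A rough sketch of the shape (a hypothetical in-memory grant store, not the TEAM implementation):

        import datetime as dt

        GRANTS = {}  # hypothetical store: (user, role) -> grant record

        def grant_elevated(user, role, minutes, approved_by):
            """Record a time-boxed elevation; it disappears on its own."""
            expires = dt.datetime.utcnow() + dt.timedelta(minutes=minutes)
            GRANTS[(user, role)] = {"expires": expires, "approved_by": approved_by}
            return expires

        def has_access(user, role):
            grant = GRANTS.get((user, role))
            if grant is None:
                return False
            if dt.datetime.utcnow() >= grant["expires"]:
                GRANTS.pop((user, role), None)  # auto-expiry: access lapses without anyone acting
                return False
            return True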

    Lightning round: GitHub Actions workflow page performance improvements, Lambda Managed Instances (slightly cursed but interesting), a quick atmos tooling blip, and k8sdiagram.fun for explaining k8s to humans.

    We close with Marc Brooker’s “What Now? Handling Errors in Large Systems” and the takeaway: error handling isn’t a local code decision, it’s architecture. Crashing vs retrying vs continuing only makes sense when you understand correlation and blast radius.
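
    A small version of that idea in code: classify failures up front and make the “what now” decision explicit, instead of re-deciding it at every call site. A sketch (the exception names and policy are placeholders for whatever your architecture actually needs):

        class RetryableError(Exception):   # transient and uncorrelated: back off and try again
            pass

        class DegradedResult(Exception):   # optional dependency failed: continue without it
            pass

        class FatalError(Exception):       # local state is suspect: crash and let the supervisor restart
            pass

        def handle(work, fallback=None, retries=3):
            for attempt in range(retries):
                try:
                    return work()
                except RetryableError:
                    if attempt == retries - 1:
                        raise              # escalation is a decision, not an accident
                except DegradedResult:
                    return fallback        # keep serving, minus the optional feature
                except FatalError:
                    raise SystemExit(1)    # crashing is only safe if restarts aren't correlated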

    shipitweekly.fm has links + the contact email. Want to be a guest? Reach out. And if you’re enjoying the show, follow/subscribe and leave a quick rating or review. It helps a ton.

    Links from this episode

    Cloudflare https://blog.cloudflare.com/building-our-maintenance-scheduler-on-workers/

    AWS on Vercel https://aws.amazon.com/about-aws/whats-new/2025/12/aws-databases-are-available-on-the-vercel/ https://vercel.com/changelog/aws-databases-now-available-on-the-vercel-marketplace

    TEAM https://aws-samples.github.io/iam-identity-center-team/ https://github.com/aws-samples/iam-identity-center-team

    GitHub Actions https://github.blog/changelog/2025-12-22-improved-performance-for-github-actions-workflows-page/

    Lambda Managed Instances https://docs.aws.amazon.com/lambda/latest/dg/lambda-managed-instances.html

    Atmos https://github.com/cloudposse/atmos/issues

    k8sdiagram.fun https://k8sdiagram.fun/

    Marc Brooker https://brooker.co.za/blog/2025/11/20/what-now.html

    16 mins