Cultivating Security

Written by: Cultivating Security

About this listen

Deep examinations of industry incidents, vendor risk, and operational security decisions from 25+ years in the field. AI-narrated episodes transform written analysis into practical insights for security professionals who need to understand what really happens when security meets operational reality. No certifications required, just real-world experience.

© 2026 Cultivating Security. All rights reserved.
Episodes
  • Week 1: Introduction: Foundations That Nobody Teaches
    Jan 6 2026
    There’s a gap in how people learn security work. Not a small one. You can get certified six ways from Sunday. You can read every framework document NIST ever published. You can know the OWASP Top 10 backwards and forwards. And you’ll still walk into your first real security role completely unprepared for how the work actually functions.
    Because nobody teaches you the organizational part. The political part. The part where your technically perfect solution dies in a budget meeting. The part where you discover that half your environment isn’t documented, a quarter of it is running on systems that haven’t been patched in two years, and everyone just… works around it. Nobody teaches you how to prioritize when everything looks critical. How to communicate risk to people who don’t think in terms of attack vectors. How to build security in organizations where you’re not the one making decisions. How to do the job well in environments that are messier than any textbook ever acknowledged. This series is about filling that gap.
    Who This Is For
    This is written for people with roughly 1-5 years in IT or security. You understand the technical fundamentals. You know what authentication means, how logging works, what APIs do, how cloud environments function. You’re not looking for “Security 101” content. What you’re looking for—whether you know it yet or not—is how to operate effectively in actual organizations. How to navigate the friction between what should happen and what’s actually possible. How to develop the judgment that separates people who know security from people who can actually get security work done.
    If you’re earlier in your career than that, some of this might not land yet. That’s fine. Bookmark it and come back when you’ve seen enough organizational reality to recognize what’s being described. If you’re later in your career, you’ve probably learned most of this already—the hard way. Maybe you’ll still find value in seeing it articulated clearly, or maybe you’ll just nod along and think “yeah, that tracks.”
    What This Series Covers
    Twelve topics, published weekly, in a deliberate sequence:
    1. Understanding Your Environment Before You Try to Secure It — why visibility and asset knowledge are foundational, not optional.
    2. Fort Knox Isn’t the Goal — learning to manage risk instead of eliminating it, and why your risk tolerance is probably miscalibrated.
    3. The Logging and Visibility Problem No One Mentions — the gap between what you think you can see and what you actually can see, especially in SaaS.
    4. The Identity Sprawl Problem — why identity is the real perimeter now and why it’s so damn hard to manage.
    5. Vendor Relationships Aren’t Partnerships — how to assess vendor risk beyond security questionnaires and why “we take security seriously” means nothing.
    6. Reporting to IT: How to Build Security When You’re Not in Charge — strategies for security practitioners working under non-security leadership.
    7. Why Security Projects Fail (And It’s Usually Not Technical) — the organizational and political dynamics that kill initiatives before they start.
    8. Reading the Room: What Your CISO Actually Cares About — translating technical risk into business language and understanding executive constraints.
    9. Compliance Is Not Security (But You Still Have to Care) — how frameworks actually work and how to use them without letting them define your entire program.
    10. When ‘Best Practices’ Don’t Apply — making intelligent trade-offs when reality prevents textbook implementations.
    11. Incident Response Is Half Politics — the organizational dynamics of actual incidents and why your IR plan won’t survive first contact.
    12. Learning from Incidents You Didn’t Have — building pattern recognition from public breaches without becoming paralyzed by threat awareness.
    The first four posts establish reality: what environments actually look like, how to think about risk, and the visibility and identity challenges that underpin everything else. The middle section covers organizational navigation: vendors, reporting structures, project failure modes, and communication. The final posts address judgment and crisis: compliance frameworks, adapting best practices, handling incidents, and learning from external events. Each piece stands alone. But they build on each other. Concepts introduced early get referenced later when they become relevant in new contexts.
    What This Series Isn’t
    This isn’t vendor-neutral tool reviews. This isn’t certification prep. This isn’t step-by-step technical tutorials. This isn’t going to tell you how to configure a SIEM or write detection rules or implement zero trust architecture. There are other resources for that, and many of them are quite good. This is about the stuff that matters just as much as technical skills but rarely gets explained clearly: how to operate in imperfect environments, how to communicate effectively with people ...
    9 mins
  • When Your Vendor Drops a Security Layer (And Doesn’t Tell You)
    Dec 24 2025
    Back in November, there was a piece on KrebsOnSecurity about the Cloudflare outage — particularly companies that chose to bypass Cloudflare entirely to get their services back online. I wrote an internal lessons-learned analysis and sent it to my IT peers at the time. Over the past month it’s come up in a few conversations, and this week, while working on some blog posts for January, it surfaced again. There’s an angle here I think got missed in the initial coverage. So here’s my read on it, with a month of distance.
    (Cloudflare published a detailed post-mortem of the incident, which is worth reading for the technical depth. What follows is not about Cloudflare’s response — which was transparent and thorough — but about what happened downstream when companies and SaaS vendors chose to bypass Cloudflare entirely during the outage.)
    The Operational Decision That Made Sense
    Operationally, bypassing Cloudflare made sense in the moment. Website’s down, customers are waiting, business is bleeding. Route around the problem and get back online. Nobody’s going to argue with the urgency. According to reporting from KrebsOnSecurity, there was roughly an eight-hour window when several high-profile sites decided to bypass Cloudflare for the sake of availability. Some companies were able to pivot away temporarily; others couldn’t because their DNS was also hosted by Cloudflare or because the Cloudflare portal itself was unreachable.
    But here’s what kept sticking with me: Cloudflare wasn’t just a CDN or performance layer for a lot of these companies. It was a significant part of their defense-in-depth strategy. And here’s the critical nuance that I think matters: Cloudflare isn’t actually a single layer of defense-in-depth. It’s a concentration of multiple security controls delivered through a single platform.
    What Actually Got Removed
    When you pulled Cloudflare out of the path — even temporarily — you didn’t just remove “a layer.” You removed several interacting controls at once:
    • DDoS mitigation (L3/L4/L7)
    • Bot management
    • Rate limiting
    • WAF rule enforcement
    • Request normalization and sanitization
    • TLS termination and policy enforcement
    • IP reputation filtering
    • Geo-based access controls
    • Abuse and anomaly detection
    If you didn’t have a mature, well-tuned WAF of your own sitting behind Cloudflare — and more importantly, if you didn’t have comparable controls for rate limiting, bot detection, IP reputation, and request scrubbing — you may have just exposed yourself to multiple attack vectors simultaneously. Attack vectors that had been quietly mitigated for years, to the point where you forgot they existed. (For a concrete sense of what even one of these controls looks like at the origin, see the sketch at the end of this entry.)
    As Aaron Turner from IANS Research pointed out to KrebsOnSecurity: “Your developers could have been lazy in the past for SQL injection because Cloudflare stopped that stuff at the edge. Maybe you didn’t have the best security QA for certain things because Cloudflare was the control layer to compensate for that.”
    That’s the risk of outsourcing security controls without understanding what you’re outsourcing. Those controls compound each other. They’re designed to work together. Losing them together is far more dangerous than losing a single, isolated control.
    And here’s an important nuance from Cloudflare’s post-mortem: the outage impacted their Bot Management system and caused widespread HTTP 5xx errors across their core CDN and security services. But not all of Cloudflare’s protections failed at the same time. When vendors bypassed Cloudflare entirely to restore service, they weren’t just removing failed protections — they were removing all of Cloudflare’s protections, including DDoS mitigation, WAF rules, and rate limiting that were still functioning.
    The Questions That Should Have Been Asked
    So the question becomes: did anyone pause to think about that in the moment? Or did they just act? Did Security have a say in the decision to bypass? Did they understand they were dropping multiple layers of protection at once? Did they have equivalent controls ready to absorb the gap — not just a WAF, but rate limiting, bot detection, abuse monitoring, and more? Or was the decision made in a war room where Security wasn’t even present?
    The Structural Problem at Smaller SaaS Vendors
    Or — and I think this is closer to reality for a lot of SaaS vendors — was there no separation between the person making the operational decision and the person responsible for security? Because here’s the thing: SaaS vendors come in all shapes and sizes now. DevOps, DevSecOps, small engineering teams where the “senior” engineer is also the security person. Or even smaller vendors with outsourced vCISOs who aren’t involved in real-time operational decisions at all. When the person responding to an outage is wearing both the operations hat and the security hat, where does their mind default to under pressure? Can they even think about security and operations ...
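    To make the “comparable controls at the origin” idea concrete, here is a minimal sketch of just one control from that list: a per-IP sliding-window rate limiter. This is purely illustrative and assumes a single-process Python origin service; it is not anything Cloudflare or the vendors in the story actually run, and a production fallback would need shared state across instances (Redis or similar) plus eviction of idle keys.

```python
import time
from collections import defaultdict, deque


class SlidingWindowRateLimiter:
    """In-process rate limiter keyed by client IP.

    A crude stand-in for one of the controls an edge provider
    normally absorbs. Hypothetical example; the limits are arbitrary.
    """

    def __init__(self, max_requests: int = 100, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._hits: dict[str, deque] = defaultdict(deque)

    def allow(self, client_ip: str) -> bool:
        """Return True if this request fits in the window, else False."""
        now = time.monotonic()
        hits = self._hits[client_ip]
        # Drop timestamps that have aged out of the sliding window.
        while hits and now - hits[0] > self.window_seconds:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False  # Over budget: the caller should answer HTTP 429.
        hits.append(now)
        return True


if __name__ == "__main__":
    # Demo: 3 requests per second allowed; the 4th and 5th are rejected.
    limiter = SlidingWindowRateLimiter(max_requests=3, window_seconds=1.0)
    for i in range(5):
        print(i, limiter.allow("203.0.113.7"))
```

    Even a fallback this crude, toggled on when traffic stops flowing through the edge, narrows one of the gaps a bypass opens. The harder problem, and the point of the episode, is that several of the other listed controls (bot management, IP reputation, request scrubbing) have no equally cheap origin-side stand-in.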
    19 mins
  • Security Third: Why “Security First” Makes Organizations Less Secure
    Dec 13 2025
    I heard something on a podcast the other day that’s been rattling around in my head ever since. The hosts were talking about Mike Rowe’s “Safety Third” concept — the idea that safety matters, sure, but treating it as the absolute top priority above everything else can actually make you less safe. Not because safety doesn’t matter, but because the “Safety First” mantra creates complacency. It makes people think someone else is responsible for their wellbeing. It replaces common sense and personal awareness with compliance theater.
    And listening to them explain it, I realized we’ve built the exact same problem in information security. The idea: declaring safety the absolute top priority makes people complacent. They stop thinking critically. They assume someone else made everything safe. That’s when accidents happen. That’s exactly what we’ve done with “Security First”.
    You hear it everywhere. Microsoft says “security comes first when designing any product or service” and tells employees “if you’re faced with the tradeoff between security and another priority, your answer is clear: Do security.” AWS states “cloud security at AWS is the highest priority.” Meta claims “safeguarding your data is our highest priority.” And just like the safety banners Rowe saw before being asked to do something dangerous, these declarations create a problem: they make people believe someone else is responsible for security.
    Over time, organizations become convinced that because they’ve said security is first, because they’ve implemented the tools and policies and compliance frameworks, they’re actually secure. They stop looking both ways. They trust that if the process allowed it, it must be safe. And that’s when things go wrong.
    Now look — for companies like Microsoft and AWS, maybe “security first” actually makes sense. They’re becoming the world’s infrastructure. Their product is security and availability. But the rest of us? The manufacturers, financial institutions, retailers, healthcare systems? Our business isn’t security. Our business is making things, moving money, serving customers, treating patients. Security enables that mission. It doesn’t replace it.
    And yet we keep demanding that security be first. We push for CISOs on boards. We complain that only 12% of S&P 500 companies have board directors with cyber credentials, that 19% of Fortune 500 companies don’t have a CISO. We point out that when Krebs on Security looked at the Fortune 100, only five companies listed a security professional on their executive leadership pages. We act like the problem is that security doesn’t have enough authority, enough budget, enough executive visibility.
    But what if that’s backwards? Whether you’re at a Fortune 500 with a CISO reporting to the board or a regional company where security reports to IT, the pattern is the same. What if demanding “security first” and a seat at the executive table is actually the problem—not because security doesn’t matter, but because it makes security someone else’s job? The CISO’s job. The board’s job. The security team’s job. Not everyone’s job. Maybe we should stop pretending security can actually be first — and start admitting that Security Third is closer to how this really works.
    The Complacency Problem
    Here’s what Rowe noticed on Dirty Jobs: he kept hearing “your safety is our top priority” right before someone asked him to do something objectively dangerous. Walk up a suspension bridge cable. Test a shark suit. Climb into a bosun’s chair hundreds of feet up. And over time, he and his crew started believing it. They started trusting that someone else had made everything safe for them. They stopped looking both ways before crossing the street because the sign said it was safe to cross. That’s when people got hurt.
    We’ve done the same thing in infosec. I’ve sat in meetings where leaders pointed at me and said “he makes us secure.” I’ve been in new hire orientations where I ask who’s responsible for information security, and the whole room points at me and my team. Once in a while, maybe once a year, someone will say “we all are” — and that’s the only right answer.
    I’m accountable for the security program. I build the framework, own the tools, manage the team. But every single employee is responsible for implementation. And yet somehow, despite years of trying different approaches, I still hear “he makes us secure.” I hear it from auditors. I hear it from business units in meetings with vendors. I hear it in leadership meetings.
    Every time I do, I know I’ve failed. Not because the program isn’t working, but because the entire world keeps saying “Security First” — which translates in people’s minds to “security equals the InfoSec team’s job.” I can stand in new hire orientations all day explaining that security is everyone’s responsibility, but I...
    45 mins