
The ITSPmagazine Podcast
Written by: Sean Martin and Marco Ciappelli (ITSPmagazine)

About this listen

Founded in 2015, ITSPmagazine began as a vision for a publication positioned at the critical intersection of technology, cybersecurity, and society. What started as a written publication has evolved into a comprehensive repository for all of their content—podcasts, articles, event coverage, interviews, videos, panels, and everything they create. This is where Sean Martin and Marco Ciappelli talk about cybersecurity, technology, society, music, storytelling, branding, conference coverage, and whatever else catches their attention. Over a decade of conversations exploring how these worlds collide, influence each other, and shape the human experience. This is where you'll find it all.

© Copyright 2015-2026 ITSPmagazine, Inc. All Rights Reserved

Genres: Politics & Government, Social Sciences
Episodes
  • New Book: Healing the Sick Care System — Why People Matter | An Interview with Gil Bashe | An Analog Brain In A Digital Age With Marco Ciappelli
    Apr 26 2026
    PODCAST EPISODE | An Analog Brain In A Digital Age With Marco Ciappelli

    The United States spends 18.7% of its GDP on health — two to three times what countries like Italy spend. Italy has a longer life expectancy. So what exactly are we paying for? Gil Bashe, Chair of Global Health & Purpose at FINN Partners, former combat medic, and author of Healing the Sick Care System: Why People Matter, joined me on An Analog Brain In A Digital Age to talk about what happens when a system designed to heal people forgets that people exist. This is not a rant. It's a diagnosis — from someone who has seen the system from every angle: the battlefield, the boardroom, the pharmaceutical lobby, and the bedside of his own child.

    📺 Watch | 🎙️ Listen | marcociappelli.com

    Gil Bashe started his career as a paratrooper combat medic. He's also the father of a child with a rare disease. He spent years as a lobbyist for the pharmaceutical industry — and he'll tell you that upfront, without flinching, before explaining why he still thinks that work mattered. He has led billion-dollar global agencies, advised companies that make life-saving drugs, and sat in rooms with the CEOs of hospital systems, pharmacy chains, and insurance companies. He asked them once if they understood each other's business models. The honest answer was: no.

    That's the system he's writing about. Not a broken one — a fragmented one. A system where the prime customer of healthcare has become the system itself, and the actual patients have been quietly reclassified as beneficiaries. As Gil puts it: if your washing machine breaks and you call the company and they tell you you're a "beneficiary of our appliance," you'd think they were out of their minds. You paid for it. You're a customer. They should treat you like one.
    His new book, Healing the Sick Care System: Why People Matter, was born from a long accumulation of observations — 11 or 12 years of writing about the health ecosystem from every angle — and catalyzed by one specific moment: the assassination of the UnitedHealthcare CEO, and the public reaction to it. The fact that the killer had a following. The fact that people were applauding. Gil found that more disturbing than anyone seemed comfortable admitting. When anger reaches that level, something in the system has gone deeply, fundamentally wrong.

    I should say: this is a conversation I had some skin in. I'm type 1 diabetic. I know what it's like to sit across from an endocrinologist who tells you things you already know, reads from a checklist, and never quite looks up from the laptop. The human element — the education, the empathy, the sense that this person actually sees you — is often just gone. And I think most doctors started their careers because they wanted to be healers. The system squeezed it out of them.

    Gil agrees. He says 51% of doctors now report burnout. Nearly 60% of nurses. And that's not a coincidence. That's a design failure.

    The AI question we kept circling was the one nobody in healthcare leadership seems to want to answer directly: if artificial intelligence takes some of the administrative burden off doctors' shoulders, does that time go back to patients — or does the system simply use it to push more throughput? More appointments per day, not more minutes per patient.

    Gil's framework for thinking about this is worth keeping: IQ, EQ, and TQ. Intellectual intelligence, emotional intelligence, and technology intelligence. The doctors we need going forward aren't just the ones who scored highest on their MCATs. They're the ones who can read a room. Who can hear a patient bring in a printout from WebMD and respond with curiosity instead of dismissal. Who understand that a curious patient is a gift, not an inconvenience.
    He told me a story from the book — one doctor who cut his wife off mid-sentence and said, "Who are you gonna believe? Me, or a patient?" And another doctor, in Santa Monica, who performed a long and complicated surgery on his daughter, walked into the hospital cafeteria in his surgical scrubs with photographs of every step of the procedure, laid them out on the table, explained everything in plain language, and then left his personal cell phone number. "Call me with any question." They did. He picked up. That's not technology. That's not policy. That's personality. And Gil's argument — which I think is correct — is that we've built a system that systematically selects against it.

    The hopeful part of the conversation surprised me. I expected nuance. What I got was genuine belief. We have the best trained doctors in the world. We are the source of global medical innovation. We spend enough money — the problem isn't resources, it's alignment. The fix, as Gil sees it, starts with every part of the system — payers, pharmaceutical companies, hospital systems, policy makers — looking in the mirror and asking: am I still on mission? And then, slowly, getting back to why this system was created in ...
    37 mins
  • On the Internet, Nobody Knows You're Not Human — And Nobody's Asking | Written by Marco Ciappelli & Read by Tape3
    Apr 24 2026
    An Analog Brain In A Digital Age — A Newsletter by Marco Ciappelli

    On the Internet, Nobody Knows You're Not Human — And Nobody's Asking

    There was a moment — brief, unrepeatable — when the internet felt like a genuinely open place. No profiles. No algorithms deciding what you deserved to see. No one monetizing the fact that you existed. You showed up, you explored, you talked to strangers in other countries about things that mattered to you, and the whole thing felt less like a product and more like a discovery. Like finding a door to another dimension.

    There's a cartoon that captured that moment perfectly. 1993. The New Yorker. Peter Steiner. Two dogs, one at a computer, and the line that accidentally defined an entire era of the internet: "On the Internet, nobody knows you're a dog." https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog

    It was funny. It was also prophetic. And it was optimistic in a way we've completely forgotten how to be about the web. Anonymity as freedom. Identity as something fluid, chosen, playful. You could be anyone. You could be from anywhere. You could reinvent yourself in real time, with no one to contradict you.

    Then surveillance capitalism arrived and broke the party. Cookies. Behavioral profiling. The algorithmic panopticon. Suddenly everyone knew everything. You weren't a dog anymore — you were a demographic, a data point, a cluster of purchase histories and scroll patterns. The internet that promised liberation became the most precise identity-tracking machine ever built. Anonymity collapsed under the weight of monetization. Nobody knows you're a dog became everyone knows you're a dog, what breed, what you ate for breakfast, and which vet you Googled at 2am.

    And now we're in the third act. A Buddhist monk named Yang Mun has 2.5 million Instagram followers. He posts silent morning meditations. He has made over $300,000 since October.
    Three Buddhist scholars reviewed his content and confirmed: his wisdom isn't grounded in any actual scripture. It just sounds like it is. Yang Mun doesn't exist. He was built with ChatGPT, HeyGen — an AI platform that generates realistic synthetic human video, a face, eyes, a voice, moving and breathing and entirely artificial — and a handful of other tools, by a creator operating inside what's being called "Big Slop": a venture-backed industry that manufactures fake influencers, automates their posting, and scales them to millions of followers while platforms, politely, look the other way.

    Hat tip to Jack Brewster, whose LinkedIn post on Yang Mun is what started this thread of thought. https://www.linkedin.com/posts/jackbrewster_a-buddhist-monk-named-yang-mun-has-25-million-activity-7451268378499137537-RPB1?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAD_QZMB_jUr1316NWqo3MgG_iFVSPTfDgY

    The circle has closed. And inverted. We went from nobody knows you're a dog to everyone knows you're a dog to something far stranger: Nobody knows you're not human. The dog is gone. The human is optional.

    Here's what interests me — and it's not the outrage part, because the outrage is easy and everyone will do it. What interests me is the McLuhan part. Marshall McLuhan said it in 1964: the medium is the message. Not the content. The medium itself. The form of transmission shapes reality more than anything transmitted through it.

    Yang Mun's fake wisdom is almost beside the point. The scholars confirmed it's scripturally meaningless. But it sounds right — which is precisely the tell. The content was never engineered for truth. It was engineered for the platform. For the algorithm. For the engagement pattern that rewards the feeling of depth over the presence of it. The medium produced the monk. The monk is the message.
    And if you zoom out — which is what I keep trying to do from Florence, where the stones beneath my feet are five hundred years old and nobody around me is particularly impressed by disruption — you see something that looks less like a technology story and more like a civilization story. We built an internet that promised connection. We built AI to simulate humans. Somewhere along the way we forgot to ask whether any of it was real — or maybe we never quite got around to asking in the first place.

    Because here's the thing: this didn't happen slowly enough for us to develop a moral relationship with it. There was no adjustment period. No cultural processing. The fake monk didn't represent a fall from grace. It was a first contact situation. We haven't even named what's wrong yet, let alone decided whether it matters.

    The analog brain — slow, emotional, context-dependent, stubbornly human — is the one thing that still notices the difference between a conversation that carries weight and one that merely carries words. It's not superior in processing power. It's just that it comes from somewhere. From experience. From loss. From the specific, irreplaceable accident of having lived a particular ...
    10 mins
  • From RSAC Conference 2026 Floor to the CSA Report: What Enterprises Are Missing About AI Agents | A Brand Highlight Conversation with Itamar Apelblat, Co-Founder and CEO of Token Security
    Apr 24 2026

    The floor at RSAC Conference 2026 had one dominant frequency, and it was not subtle. Every booth, every hallway, every late-night conversation kept circling back to the same question: how do enterprises adopt AI agents without losing control of them? In a post-conference follow-up, Itamar Apelblat, Co-Founder and CEO of Token Security, translates what he heard on the ground into what the data now confirms.

    Token Security arrived at RSAC with a fresh set of findings, produced in collaboration with the Cloud Security Alliance and released alongside the event. The report, Autonomous but Not Controlled: AI Agent Incidents Now Common in Enterprises, puts numbers to what practitioners already suspected: 65 percent of organizations have experienced an AI agent-related incident in the past twelve months, and 82 percent discovered agents running in their environment that no one had authorized. Only 21 percent have a formal process for decommissioning agents — a gap Itamar Apelblat flags as a low-hanging attack path. The short version from the conversation: visibility is the starting line, not the finish line, and the path from discovery to intent-based enforcement is where most programs are stuck.

    This is a Brand Highlight. A Brand Highlight is a ~5 minute introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

    GUEST

    Itamar Apelblat, Co-Founder and CEO, Token Security | https://www.linkedin.com/in/itamar-apelblat/

    RESOURCES

    Learn more about Token Security: https://www.token.security/

    Download the CSA + Token Security Report — Autonomous but Not Controlled: AI Agent Incidents Now Common in Enterprises: https://cloudsecurityalliance.org/artifacts/autonomous-but-not-controlled-ai-agent-incidents-now-common-in-enterprises

    Are you interested in telling your story?
    ▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
    ▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
    ▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

    KEYWORDS

    Itamar Apelblat, Token Security, Sean Martin, brand story, brand marketing, marketing podcast, brand highlight, AI agents, agentic AI, non-human identity, identity security, shadow AI, CSA report, Cloud Security Alliance, intent-based access, AI agent governance, agent decommissioning, RSAC Conference 2026


    7 mins