Episodes

  • The Gender Gap in AI Adoption
    Feb 17 2026
Across industries, studies show women adopt generative AI tools at a rate about 25% lower than men. But does slower adoption mean falling behind, or is there a bigger story at play? In this episode of Leading Change in the Wild, I dive into the Harvard research and explore why women are opting out of AI at higher rates, what role risk aversion plays, and how the future of work may actually favor uniquely human skills, many of which women excel at. 📉 Here’s what I unpack:
    • The gender gap in AI adoption and why it exists
    • How risk perception, ethics, and digital literacy influence adoption choices
    • Why technical skills are not the only driver of success in an AI-driven future
    • How soft skills and human-centered capabilities may redefine opportunity
    • What leaders can do to create inclusive, empowering AI adoption strategies
    The lesson is clear. AI is not just about who clicks “download” first. Real advantage comes from combining technology with human judgment, creativity, and ethical decision-making. 👇 Let’s discuss: Do you think slower AI adoption among women is a real disadvantage? Which human skills will be most critical in an AI-driven workplace? 🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.
    7 mins
  • Inside Clawbot and Moltbook’s Leap Into Autonomous AI
    Feb 10 2026
What happens when AI agents stop waiting for prompts and start taking action on their own? We’re beginning to see that line blur, and the headlines are starting to feel a little sci-fi. In this episode of Leading Change in the Wild, I break down what’s happening with autonomous AI agents like Clawbot and Moltbook, why they’re generating so much hype, and the very real leadership and ethical questions they raise as autonomy increases. 📉 Here’s what I unpack:
• What makes agents like Clawbot fundamentally different from traditional AI tools
    • Why persistent memory, proactivity, and autonomy are changing the risk profile
    • Real examples of agents acting without explicit prompts, including calling their owners
    • What Moltbook reveals about AI agents interacting without human oversight
    • Why accountability, governance, and human-in-the-loop design matter more than ever
    This technology is impressive, but it also makes one thing clear: once autonomy is introduced, the questions shift from what can AI do to who is responsible when it does it. We can’t put the genie back in the bottle. The focus now has to be on ethical design, clear guardrails, and human leadership that keeps pace with the technology. 👇 Let’s discuss: How comfortable are you with autonomous AI? Where should accountability sit when agents act on their own? What guardrails feel non-negotiable as autonomy increases? 🔔 Subscribe for weekly insights on digital transformation, change management, and emerging technologies.
    12 mins
  • Firehound and the Hidden Risk of Vibe Coding
    Feb 3 2026
    Vibe coding makes it feel easy to launch an app. Write a good prompt, ship fast, and start monetizing. But what happens when no one stops to think about security, data exposure, or who is actually protecting users? In this episode of Leading Change in the Wild, I take a closer look at Firehound and the work they are doing to expose vibe-coded apps in the App Store that are leaking user data, and why this should be a wake-up call for builders, leaders, and consumers. 📉 Here’s what I unpack:
    • Why vibe-coded apps are creating serious security vulnerabilities
    • How Firehound uncovered nearly 200 apps leaking user data
    • What the Tea app incident revealed about verification, privacy, and harm
    • Why fast AI-driven development often skips critical safeguards
    • How this changes the build versus buy conversation
    • What leaders need to consider before encouraging internal vibe coding
    AI can accelerate development, but speed without security creates risk. When we remove guardrails and expertise, the cost shows up later in user trust, data exposure, and reputational damage. This moment is a reminder that just because something can be built quickly does not mean it should be deployed without rigor. Whether you are building internally or shipping to the public, security and governance still matter. 👇 Let’s discuss: Do you think vibe coding belongs in enterprise environments? How should leaders balance speed, innovation, and security when using AI to build? 🔔 Subscribe for weekly insights on digital transformation, change management, leadership, and emerging technologies.
    8 mins
  • Apple & Google’s AI Partnership
    Jan 20 2026
    Is Siri finally about to answer our questions? Apple’s new partnership with Google has a lot of people talking. Some see it as Apple waving a white flag in the AI race. I see it as something much more strategic. In this episode of Leading Change in the Wild, I break down Apple’s decision to partner with Google’s Gemini AI to power Siri, what this means for the future of AI competition, and why the build versus buy conversation is resurfacing in a big way. 📉 Here’s what I unpack:
    • Why Apple partnering with Google is not an AI failure but a strategic choice
    • How this deal pushed Alphabet past a $4 trillion valuation
    • Why build versus buy is back in enterprise conversations
    • What data ownership and model control have to do with AI strategy
    • How Google is quietly positioning itself for a major AI comeback
    • What this partnership signals for leaders navigating AI investments
    AI leadership is not always about being first. Sometimes it is about knowing what to build, what to buy, and what to partner on. This moment is a reminder that strategy is about focus. Apple is doubling down on its core strengths while leveraging partnerships to stay competitive in a rapidly changing market. 👇 Let’s discuss: Is build versus buy a real option for most organizations right now? What do you think Apple’s partnership with Google signals about the future of AI competition? 🔔 Subscribe for weekly insights on digital transformation, change management, emerging technologies, and leadership.
    8 mins
  • Why AI Alone Won’t Fix Education
    Jan 13 2026
    Test scores are dropping. Literacy rates are slipping. And suddenly, AI is being positioned as the solution that will save the education system. But is technology really the unlock, or are we missing the bigger picture? In this episode of Leading Change in the Wild, I take a closer look at the headlines around AI-driven schools like Alpha Schools and unpack what is actually driving student outcomes versus what is simply getting the most attention. 📉 Here’s what I unpack:
    • Why two hours of AI tutoring is not the real story behind student success
    • What project-based and experiential learning contribute to higher outcomes
    • How AI is often confused with true system-level transformation
    • Why digitizing classrooms is not the same as changing how learning works
    • What education can teach us about people, process, and technology working together
    AI can create capacity. But it does not automatically create better learning. If we want different outcomes for students, we need to stop chasing tools and start rethinking the system itself. Technology should support new ways of learning, not just digitize old ones. 👇 Let’s discuss: Is AI really transforming education, or just getting the credit for bigger changes? What do you think actually drives better outcomes for students today? 🔔 Subscribe for weekly insights on digital transformation, change management, and emerging technologies.
    12 mins
  • When AI Marketing Gets Ahead of Reality
    Jan 6 2026
    Salesforce says it doesn’t regret laying off nearly 4,000 employees. It also says those layoffs weren’t really about AI. And it definitely says it trusts its AI models. So why do the headlines feel so contradictory? In this episode of Leading Change in the Wild, I break down the confusing and revealing signals Salesforce is sending about workforce reductions, Agentforce, and what trust really looks like when AI moves from pilot to production. 📉 Here’s what I unpack:
• Why Salesforce’s AI-driven layoff narrative never quite added up
    • How mixed messaging from leadership is fueling confusion and skepticism
    • What internal comments reveal about trust, accuracy, and AI readiness
• Why clean data, governance, and business logic are being re-emphasized
• What Salesforce’s pivot back toward rule-based automation signals for the broader market
    AI is not magic. It is a tool. And when even the largest software companies are recalibrating expectations, leaders need to pay attention. This moment is a reminder that AI first is not a strategy. Real value still depends on people, process, and foundations that cannot be skipped. 👇 Let’s discuss: What do you make of Salesforce’s shifting narrative? Are you seeing similar disconnects between AI promises and reality in your organization? 🔔 Subscribe for weekly insights on digital transformation, change management, and emerging technologies.
    11 mins
  • The AI Vibe Shift in the Enterprise
    Dec 23 2025
    AI adoption at the enterprise level isn’t living up to the hype—and the “magic wand” promise of tools like Microsoft Copilot may be wearing off. In this episode of Leading Change in the Wild, I delve into what’s happening behind the scenes of enterprise AI adoption, why some tools are underperforming, and what this means for leaders navigating AI investments. 📉 Here’s what I unpack:
    • Why Microsoft Copilot isn’t seeing the adoption expected across enterprises
    • How top-down directives fail to drive real AI adoption
    • Why using AI as a “checkbox” leads to wasted licenses and disappointed teams
    • The lessons leaders can take from hype versus reality when introducing AI tools
    AI is a powerful tool, but it isn’t a strategy. Success still requires real work with your team, clear processes, and understanding the problems you’re trying to solve. 👇 Let’s discuss: Are you seeing a similar AI adoption “vibe shift” in your organization? How are you ensuring your team and processes are ready before bringing in new AI tools? 🔔 Subscribe for weekly insights on digital transformation, change management, and emerging technologies.
    15 mins
  • Is the Metaverse Finally Dead?
    Dec 9 2025

The year was 2021, and we were all told that virtual worlds were the future. Now, in 2025, Meta is reportedly preparing to cut its metaverse budget by up to 30%.

    In this episode of Leading Change in the Wild, I take a closer look at what this shift really means, how investors are responding, and why Meta might be turning its full attention toward AI and wearable technology instead.

📉 Here’s what I unpack:
• Why Meta is pulling back on a $60 billion metaverse investment and why investors cheered
• Where that money is likely heading next, from AI infrastructure to wearable tech like Meta glasses
• Why wearables may become the next major data source for training AI models
• The privacy and ethical concerns tied to always-on, data-collecting devices
• Why the metaverse is a cautionary tale for leaders jumping into AI without a clear strategy
• How to avoid chasing hype and instead use technology to solve real business problems

    For years, people made it clear they didn’t want to live in a virtual world built by Big Tech. Now we’re seeing what happens when companies chase hype instead of listening to their customers. The same warning signs are showing up in today’s AI race.

👇 Let’s discuss: Is the metaverse really dead, or are we still in the early days? Would you use AI-powered glasses in your daily life? Is your organization leading with technology or strategy when it comes to AI?

    🔔 Subscribe for weekly insights on digital transformation, AI, and the human side of technology.

    10 mins