
Leading Change

Written by: Ema Roloff

About this listen

Welcome to Leading Change, where we dive into the real conversations shaping the future of work. Hosted by Ema Roloff, this series brings together business leaders, change-makers, and innovators to explore the intersection of technology, change management, and leadership in today’s evolving workplace. Each episode is packed with actionable insights, candid stories, and fresh perspectives on navigating transformation, whether that means leveraging emerging tech, leading through disruption, or building resilient teams. If you’re passionate about creating meaningful change and thriving in the digital era, this is the podcast for you. Let’s redefine what it means to lead in a world where change is the only constant.

Copyright 2024 All rights reserved.
Episodes
  • The Gender Gap in AI Adoption
    Feb 17 2026
Across industries, studies show women adopt generative AI tools at a rate about 25% lower than men. But does slower adoption mean falling behind, or is there a bigger story at play? In this episode of Leading Change in the Wild, I dive into the Harvard research and explore why women are opting out of AI at higher rates, what role risk aversion plays, and how the future of work may actually favor uniquely human skills, many of which women excel at. 📉 Here’s what I unpack:
    • The gender gap in AI adoption and why it exists
    • How risk perception, ethics, and digital literacy influence adoption choices
    • Why technical skills are not the only driver of success in an AI-driven future
    • How soft skills and human-centered capabilities may redefine opportunity
    • What leaders can do to create inclusive, empowering AI adoption strategies
    The lesson is clear. AI is not just about who clicks “download” first. Real advantage comes from combining technology with human judgment, creativity, and ethical decision-making. 👇 Let’s discuss: Do you think slower AI adoption among women is a real disadvantage? Which human skills will be most critical in an AI-driven workplace? 🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.
    7 mins
  • Inside Clawbot and Moltbook’s Leap Into Autonomous AI
    Feb 10 2026
What happens when AI agents stop waiting for prompts and start taking action on their own? We’re beginning to see that line blur, and the headlines are starting to feel a little sci-fi. In this episode of Leading Change in the Wild, I break down what’s happening with autonomous AI agents like Clawbot and Moltbook, why they’re generating so much hype, and the very real leadership and ethical questions they raise as autonomy increases. 📉 Here’s what I unpack:
• What makes agents like Clawbot fundamentally different from traditional AI tools
    • Why persistent memory, proactivity, and autonomy are changing the risk profile
    • Real examples of agents acting without explicit prompts, including calling their owners
    • What Moltbook reveals about AI agents interacting without human oversight
    • Why accountability, governance, and human-in-the-loop design matter more than ever
    This technology is impressive, but it also makes one thing clear: once autonomy is introduced, the questions shift from what can AI do to who is responsible when it does it. We can’t put the genie back in the bottle. The focus now has to be on ethical design, clear guardrails, and human leadership that keeps pace with the technology. 👇 Let’s discuss: How comfortable are you with autonomous AI? Where should accountability sit when agents act on their own? What guardrails feel non-negotiable as autonomy increases? 🔔 Subscribe for weekly insights on digital transformation, change management, and emerging technologies.
    12 mins
  • Firehound and the Hidden Risk of Vibe Coding
    Feb 3 2026
Vibe coding makes it feel easy to launch an app: write a good prompt, ship fast, and start monetizing. But what happens when no one stops to think about security, data exposure, or who is actually protecting users? In this episode of Leading Change in the Wild, I take a closer look at Firehound and the work it is doing to expose vibe-coded apps in the App Store that are leaking user data, and why this should be a wake-up call for builders, leaders, and consumers. 📉 Here’s what I unpack:
    • Why vibe-coded apps are creating serious security vulnerabilities
    • How Firehound uncovered nearly 200 apps leaking user data
    • What the Tea app incident revealed about verification, privacy, and harm
    • Why fast AI-driven development often skips critical safeguards
    • How this changes the build versus buy conversation
    • What leaders need to consider before encouraging internal vibe coding
    AI can accelerate development, but speed without security creates risk. When we remove guardrails and expertise, the cost shows up later in user trust, data exposure, and reputational damage. This moment is a reminder that just because something can be built quickly does not mean it should be deployed without rigor. Whether you are building internally or shipping to the public, security and governance still matter. 👇 Let’s discuss: Do you think vibe coding belongs in enterprise environments? How should leaders balance speed, innovation, and security when using AI to build? 🔔 Subscribe for weekly insights on digital transformation, change management, leadership, and emerging technologies.
    8 mins