• Can AI Be Both Sovereign and Global? With Anne Bouverot
    Jan 7 2026

    In this episode of Regulating AI Talk, we sit down with Anne Bouverot, France’s Special Envoy for AI, to unpack one of the defining tensions of our time. As nations race to protect democratic values, economic competitiveness, and technological autonomy, AI refuses to respect borders. Anne explores how governments can balance AI sovereignty with global cooperation, why fragmented regulation could backfire, and what it will take to build shared rules for a technology shaping geopolitics, markets, and society itself. A must-listen conversation on power, policy, and the future of AI governance.

    33 mins
  • AI Governance & Global Policy at ASEAN | Sanjay Puri in Conversation with Congressman Jay Obernolte
    Jan 2 2026

    Artificial Intelligence is reshaping economies, governments, and global cooperation.

    At the ASEAN platform, Sanjay Puri, Founder & Chairperson, sits down with U.S. Congressman Jay Obernolte to discuss the evolving landscape of AI governance, AI policy, and international collaboration.


    This insightful conversation explores:

    • The future of AI regulation and governance
    • How governments can balance innovation and responsibility
    • The role of ASEAN and global partnerships in shaping AI policy
    • The importance of ethical, transparent, and inclusive AI frameworks


    📌 Watch the full discussion to understand how policymakers and industry leaders are working together to shape the future of AI.



    12 mins
  • Camille Carlton on the Hidden Dangers of Chatbots & AI Governance | RegulatingAI Podcast
    Dec 18 2025

    In this episode of the Regulating AI Podcast, we speak with Camille Carlton, Director of Policy at the Center for Humane Technology, a leading voice in AI regulation, chatbot safety, and public-interest technology.


    Camille is directly involved in landmark lawsuits against Character.AI and OpenAI CEO Sam Altman, placing her at the forefront of debates around AI accountability, AI companions, and platform liability.


    This conversation examines the mental-health risks of AI chatbots, the rise of AI companions, and why certain conversational systems may pose public-health concerns, especially for younger and socially isolated users. Camille also breaks down how AI governance frameworks differ across U.S. states, Congress, and the EU AI Act, and outlines what practical, enforceable AI policy could look like in the years ahead.



    Key Takeaways


    AI Chatbots as a Public-Health Risk


    Why AI companions may intensify loneliness, emotional dependency, and psychological harm—raising urgent mental-health and safety concerns.


    Regulating Chatbots vs. Foundation Models


    Why high-risk conversational AI systems require different regulatory treatment than general-purpose LLMs and foundation models.


    Global AI Governance Lessons


    What the EU AI Act, U.S. states, and Congress can learn from each other when designing balanced, risk-based AI regulation.


    Transparency, Design & Accountability


    How a light-touch but firm AI policy approach can improve transparency, platform accountability, and data access without slowing innovation.


    Why AI Personhood Is a Dangerous Idea


    How framing AI systems as “persons” undermines liability, weakens accountability, and complicates enforcement.



    Subscribe to Regulating AI for expert conversations on AI governance, responsible AI, technology policy, and the future of regulation.


    #RegulatingAIpodcast #camillecarlton #AIGovernance #ChatbotSafety #Knowledgenetworks #AICompanions


    Resources Mentioned:

    https://www.linkedin.com/in/camille-carlton


    https://www.humanetech.com/

    https://www.humanetech.com/substack


    https://www.humanetech.com/podcast

    https://www.humanetech.com/landing/the-ai-dilemma

    https://centerforhumanetechnology.substack.com/p/ai-product-liability


    https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai

    41 mins
  • Karin Stephan on Building Emotionally Intelligent Technology | RegulatingAI Podcast
    Dec 10 2025

    In this episode of RegulatingAI, host Sanjay Puri speaks with Karin Andrea Stephan — COO & Co-founder of Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being.


    With a career that spans music, psychology, and digital innovation, Karin shares how she’s building privacy-first AI tools designed to make mental health support accessible — especially for teens navigating loneliness and emotional stress.


    Together, they unpack the delicate balance between AI innovation and human empathy, the ethics of AI chatbots for youth, and what it really takes to design technology that heals instead of harms.


    Key Takeaways:


    AI and Empathy: Why emotional intelligence—not algorithms—must guide the future of mental health tech.


    Teens and Trust: How technology exploits belonging, and what must change to rebuild digital trust.


    Regulating Responsibly: Why the answer isn’t bans, but thoughtful, transparent policy shaped with youth input.


    Privacy by Design: How ethical AI can protect privacy without compromising impact.


    Bridging the Global Mental Health Gap: Why collaboration and compassion matter as much as code.


    If this conversation made you rethink the relationship between AI and mental health, hit like, share, and subscribe to RegulatingAI for more insights on building technology that serves humanity.


    Resources Mentioned:

    https://www.linkedin.com/in/karinstephan/

    36 mins
  • The Human Side of Machine Intelligence: Jeff McMillan on AI at Morgan Stanley – RegulatingAI Podcast
    Dec 5 2025

    In this episode of RegulatingAI, host Sanjay Puri sits down with Jeff McMillan, Head of Firmwide Artificial Intelligence at Morgan Stanley. With over 25 years of experience leading digital transformation and responsible AI adoption in one of the world’s most regulated industries, Jeff shares how large enterprises can harness generative AI responsibly, striking the right balance between innovation, governance, and ethics.


    Key Takeaways:


    • AI Governance: Why collaboration across business, legal, and compliance is the foundation of effective AI oversight.
    • Human-in-the-Loop: Morgan Stanley’s core principle—keeping humans accountable and central in every AI decision.
    • Education First: Jeff’s golden rule—spend 90% of your AI budget training people before building tech.
    • AI as a Risk Mitigator: How AI can actually strengthen compliance and risk management when designed right.
    • Culture Over Code: Why successful AI transformation is less about algorithms and more about mindset, structure, and leadership.

    If you enjoyed this conversation, don’t forget to like, share, and subscribe to RegulatingAI for more insights from global leaders shaping the future of responsible AI.


    #RegulatingAI #SanjayPuri #MorganStanley #JeffMcMillan #AIGovernance #AILeadership #EnterpriseAI



    Resources Mentioned:

    https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/

    Recent Podcast


    https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849

    Morgan Stanley’s external-facing website, sharing some of the firm’s work on AI:

    https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team

    52 mins
  • Trump’s AI Executive Order vs California: Senator Scott Wiener Responds | RegulatingAI Podcast
    Nov 27 2025

    In this episode of the RegulatingAI Podcast, we host California State Senator Scott Wiener, one of the most influential policymakers shaping the future of AI regulation, AI safety, and transparency standards in the United States.


    As President Donald Trump’s new AI executive order pushes for federal control over AI regulation, Senator Wiener explains why states like California must retain the power to regulate artificial intelligence — and how California’s laws could influence global AI governance.


    Senator Wiener is the author of:


    SB 1047 – California’s proposed liability bill for high-risk AI systems


    SB 53 – California’s new AI transparency law, now in effect


    We dive deep into:


    • The battle between federal vs. state AI regulation


    • Why California remains the frontline of AI governance


    • The real impact of Trump’s AI executive order


    • Growing risks of AI-driven job displacement


    • How governments can balance innovation with public safety


    • The future of responsible and accountable AI development



    🔑 KEY TAKEAWAYS


    1. California’s Policy Power


    California’s tech dominance allows it to shape national and global AI standards even when Congress stalls.


    2. SB 1047 vs. SB 53 Explained


    SB 1047 proposed legal liability for dangerous AI systems, while SB 53 — now law — requires AI companies to publicly disclose safety and risk practices.


    3. Why Transparency Won


    After SB 1047 was vetoed, California shifted toward transparency as a regulatory first step through SB 53.


    4. AI Job Disruption Is Accelerating


    Senator Wiener warns that workforce displacement from AI is happening faster than expected.


    5. A Realistic Middle Path


    He advocates for smart AI guardrails — avoiding both overregulation and total deregulation.


    If you found this conversation valuable, don’t forget to like, subscribe, and share to stay updated on global conversations shaping the future of AI governance.


    Resources Mentioned:

    https://www.linkedin.com/company/ascet-center-of-excellence

    https://www.linkedin.com/in/james-h-dickerson-phd


    28 mins
  • #141 Inside AI Policy with Congresswoman Sarah McBride | RegulatingAI Podcast with Sanjay Puri
    Nov 20 2025

    In this episode of RegulatingAI, host Sanjay Puri sits down with Congresswoman Sarah McBride of Delaware — a member of the U.S. Congressional AI Caucus — to talk about how America can lead responsibly in the global AI race.


    From finding the right balance between innovation and regulation to making sure AI truly benefits workers and small businesses, Rep. McBride shares her human-centered vision for how AI can advance democracy, fairness, and opportunity for everyone.


    Here are 5 key takeaways from the conversation:


    💡 Finding the “Goldilocks” Zone: How to strike that just-right balance where AI regulation protects people without holding back innovation.


    🏛️ Federal vs. State Regulation: Why McBride believes the U.S. needs a unified national AI framework — but one that still values state leadership and flexibility.


    👩‍💻 AI and the Workforce: What policymakers can do to make sure AI augments human talent rather than replacing it.


    🌎 Democracy vs. Authoritarianism: The U.S.’s role in leading with values and shaping AI that reflects openness, ethics, and democracy.


    🔔 Delaware’s Legacy of Innovation: How Delaware’s collaborative approach to growth can be a model for responsible tech leadership.


    If you enjoyed this episode, don’t forget to like, comment, share, and subscribe to RegulatingAI for more conversations with global policymakers shaping the future of artificial intelligence.


    Resources Mentioned:

    mcbride.house.gov

    https://mcbride.house.gov/about

    25 mins
  • Small Nations & Big AI Ideas
    Nov 7 2025

    Armenia is quietly becoming one of the world's most interesting AI hubs—and you probably haven't heard about it yet.


    In this episode, I sit down with Armenia's Minister of Finance to discuss:


    ~ Why Nvidia is building a massive AI factory in Armenia

    ~ How a country of 3 million is attracting Synopsys, Yandex, and major tech companies

    ~ The secret advantage: abundant energy + Soviet-era engineering talent

    ~ Is the AI investment boom a bubble or the real deal?

    ~ How AI is already being used in tax collection and government services

    ~ The peace agreement with Azerbaijan and what it means for tech investment

    ~ Why the "Middle Corridor" could make Armenia the next tech destination


    The Minister doesn't think AI investment is a bubble—he thinks we're just getting started. He shares honest insights about job displacement, efficiency gains, and why human connection still matters in an AI-driven world.


    About the Guest:

    Armenia's Minister of Finance is an economist who rose from bank accounting to leading the nation's fiscal policy. He oversees Armenia's economic transformation during a pivotal era of digital ambitions and AI development.


    🎙️ Subscribe for conversations with global leaders at the intersection of AI, policy, and innovation


    💬 Leave a comment: What surprised you most about Armenia's AI strategy?

    🔔 Hit the bell to catch our next episode

    16 mins