Opening Voices

Written by: Quentin Adam

About this listen

Opening Voices explores the major technological and strategic challenges shaping Europe. I'm Quentin Adam, and in each episode I sit down with key figures from the world of tech, open source, and policy to discuss how innovation, sovereignty, and disruptive thinking can drive Europe's digital future. From cloud infrastructure to AI, from data governance to the open-source revolution, nothing is off-limits. Welcome to Opening Voices.

Hosted on Ausha. See ausha.co/privacy-policy for more information.

Clever Cloud | Politics & Government
Episodes
  • Who Controls AI Compute? - Opening Voices with Steeve Morin of ZML
    Mar 5 2026

    Artificial Intelligence is no longer just a software story. It is a compute story. In this full episode of Opening Voices, Quentin Adam speaks with Steeve Morin, founder and CEO of ZML, to explore a fundamental question: who controls the compute layer of AI?


    Together, they unpack:

    • Why AI makes cloud systems compute-bound again

    • The real difference between training and inference

    • Why inference will dominate AI workloads

    • How stateful systems break 20 years of architecture patterns

    • Why power, not space, now limits data centers

    • Whether GPUs are a temporary solution

    • The rise of TPUs, NPUs and AI-dedicated chips

    • Why hardware optionality may define the next decade


    As AI becomes a universal primitive across industries, control shifts from models to infrastructure.

    This episode connects architecture, economics and semiconductor strategy, and explains why inference may become the industrial foundation of the AI era.


    Opening Voices is also available on all streaming platforms:

    • Deezer: https://www.deezer.com/show/1001774171

    • Spotify: https://open.spotify.com/show/3QTe4gKhsmhWnlLZaUxNo1

    • Apple Podcasts: https://podcasts.apple.com/fr/podcast/opening-voices/id1806281823




    1 hr and 28 mins
  • Breaking the AI Compute Monopoly – Opening Voices with Steeve Morin of ZML
    Feb 25 2026

    What happens when AI infrastructure depends on a single compute ecosystem?


    In this final part of Opening Voices, Quentin Adam concludes his discussion with Steeve Morin, founder and CEO of ZML, to explore how to bring competition back into AI compute. ZML’s approach is simple in principle, difficult in execution: make AI workloads run efficiently on any chip.


    They discuss:

    • Why hardware abstraction is key to breaking vendor dependency

    • How compute optionality changes market dynamics

    • Why existing hardware can still deliver major efficiency gains

    • How operational complexity locks companies into single ecosystems

    • Why open source can accelerate semiconductor competition


    The future of AI will not be decided by models alone, but by who controls the compute layer beneath them.



    20 mins
  • Making AI Inference Affordable - Opening Voices with Steeve Morin of ZML
    Feb 18 2026

    If AI is to power the entire economy, inference must become affordable, scalable, and widely available.


    In this third part of Opening Voices, Quentin Adam continues the conversation with Steeve Morin, founder and CEO of ZML, to explore what it really takes to industrialise inference. They discuss:

    • Why AI must move from "chatbots as products" to AI as an infrastructure primitive

    • Why inference will power every sector: banks, startups, industry

    • How efficiency gains (sometimes 5x, 10x, even 100x+) are still possible

    • Why GPUs are not the only path forward

    • How new chips (TPUs, NPUs, and emerging players) are reopening the semiconductor market

    • Why power, density, and optimisation now matter more than raw experimentation


    This episode explains why the next wave is not about building better models, but about making inference economically viable at scale.



    Episode Chapters: Making Inference Available

    00:00 – Introduction and Context

    01:38 – AI as a Primitive vs. AI as a Product

    04:19 – The Economic Unit of the Token

    05:15 – Scaling Compute for Inference

    07:31 – A Revolution Comparable to Mobile

    08:43 – Beyond GPUs

    10:56 – Compiler Errors and Efficiency Waste

    12:38 – Understanding Chips

    15:31 – The New "Blue Ocean" of Semiconductors

    20:40 – Nvidia's Strategy and Competition

    21:43 – Conclusion and Next Episode



    23 mins