• Beyond the CPU vs GPU War: Rethinking AI Compute at the System Level
    Apr 28 2026
    In this episode of Tech Threads, Nandan Nayampally, Baya Systems CCO, sits down with Ian Ferguson, Vice President of Vertical Markets and Business Development at SiFive, to unpack one of the most important shifts happening in modern computing: AI is no longer just about scaling compute; it’s about orchestrating complexity.

    As architectures fragment across accelerators, chiplets, and custom silicon, the real challenge is no longer building faster chips; it’s turning all of these elements into a cohesive, high-performance system.

    This conversation explores why the industry is moving beyond the traditional “CPU vs GPU” narrative and toward a system-level approach where performance is defined by how effectively compute, memory, interconnect and software work together.

    From the growing momentum behind RISC-V to the rise of heterogeneous compute environments, the discussion highlights a clear trend: the future won’t be defined by a single dominant architecture, but by optimized combinations of technologies tailored to specific workloads.

    That shift introduces a new layer of complexity.

    Key themes explored in this episode include:
    - Why data movement is emerging as the primary constraint in AI systems
    - How efficiency metrics like “tokens per dollar” are reshaping design priorities
    - The shift toward purpose-built architectures across data center, automotive, and edge applications
    - The role of open ecosystems and interoperability in accelerating innovation
    - Why competitive advantage is shifting from individual components to full system design

    If you’re interested in where AI is headed, this is a must-watch conversation on the forces shaping the future of compute and what it takes to stay ahead.
    49 mins
  • Inside the AI Bottleneck: Data Movement, Chiplets, and System Scaling
    Mar 27 2026
    For the last decade, AI has been driven by one thing: more compute. Bigger models, more accelerators, higher throughput.

    But as NVIDIA’s Jensen Huang recently highlighted at GTC, the industry is hitting a different kind of wall, one that hasn’t received nearly as much attention.

    The real constraint is no longer just compute. It’s data movement.

    To its credit, NVIDIA has pushed this frontier with innovations like NVLink Fusion and continued investment in connectivity and AI dataflow architectures. But the challenge is bigger than any one company.

    As AI systems scale to hundreds, and even thousands, of processors, performance is increasingly defined by how efficiently data can be moved, synchronized, and managed across distributed architectures that span chiplets, nodes, and entire racks.

    In this episode of Tech Threads, we bring together a panel of deeply experienced technologists, architects and leaders from companies like Intel, Arm, Altera, Texas Instruments, and Arteris - individuals who have helped shape modern compute, interconnect standards, and system architecture.

    Together they explore what is really changing beneath the surface: why traditional scaling approaches are breaking down, how coherent interconnects and network-on-chip architectures are evolving, and why system-level thinking is becoming essential.

    They also dive into the growing complexity introduced by chiplet-based designs, heterogeneous compute, and distributed memory systems and what it takes to maintain performance, efficiency, and programmability at scale.

    This is not just a technology shift; it’s an architectural reset.

    If you’re building or thinking about next-generation AI systems, this conversation gets to the heart of what matters next.
    54 mins
  • From Arduino to AI Infrastructure: Scaling the Next Wave of Computing
    Jan 21 2026
    What do Arduino, IoT, edge AI, and Nvidia-era data centers have in common? They all depend on ecosystems: people, platforms, and momentum.
    In this episode, Sander Arts joins Baya’s Chief Commercial Officer and Tech Threads host Nandan Nayampally for a wide-ranging, candid conversation on how breakthrough technology actually scales.
    Sander brings a rare operator’s perspective shaped by 25+ years scaling global technology companies across semiconductors, enterprise software, and AI. As the founder of Orange Tulip Consultancy, he serves as a Fractional CMO and growth advisor, helping leadership teams turn deep technology into real-world adoption.
    Together, Nandan and Sander explore how communities, developer access, and platform ecosystems turn deep technology into real-world adoption, and why timing and openness can be just as critical as technical performance.
    The discussion moves from the maker-era lessons of Arduino and IoT to today’s AI infrastructure boom, unpacking why scaling “long-tail” customers is both an opportunity and an operational challenge, and how edge AI and data center markets are evolving in parallel. They also debate the art of “opening the kimono,” how standardization and middleware shape adoption, and why capital intensity and speed often determine whether innovation stays local or becomes global.
    They close by looking ahead at emerging trends like robotics, neo-cloud architectures, quantum with real customers, and the networking backbone powering AI’s future, and how these shifts intersect with Baya’s view of increasingly complex, software-driven systems.
    43 mins
  • The Architecture of "Open" Intelligence
    Oct 14 2025
    In this episode of Tech Threads: Weaving the Intelligent Future, legendary chip architect Jim Keller joins Nandan Nayampally, Baya Systems’ Chief Commercial Officer, to explore how openness, modularity, and simplicity are redefining the architecture of intelligence.

    From his early work on Apple’s A4 through A7 processors to today’s AI-driven computing revolution, Jim shares how every leap in performance has come from breaking complexity down into composable, modular layers. Referencing The Systems Bible, he explains why “you can’t fix broken complicated systems”, and why the only path forward is to design simpler components that can scale and evolve together.

    The conversation spans:
    - The AI paradigm shift: why traditional compute models no longer scale.
    - How data movement, not just compute, has become the new frontier.
    - The rise of chiplets and software-driven fabrics for scalable design.
    - The power of open ecosystems like RISC-V and OCA to democratize AI innovation.
    - Building a path toward sovereign and collaborative compute platforms worldwide.

    Listen as Jim Keller unpacks the engineering philosophy behind building open, intelligent systems and what it means for the future of AI and computing at scale.
    44 mins
  • AI from Edge to Cloud: Hype vs Reality
    Aug 14 2025
    In this episode of Tech Threads, Nandan Nayampally sits down with Sally Ward-Foxton (EE Times) and Dr. Ian Cutress (More Than Moore) for an unfiltered look at the state of AI, from the far edge to hyperscale data centers.

    Ahead of the recording, we asked our LinkedIn followers to weigh in on some of the biggest questions in AI today, from bottlenecks in system design to the future of GPUs. Those poll results are revealed and discussed in the episode, bringing your insights directly into the conversation.

    The discussion covers where the real bottlenecks lie in AI system design, whether “AI at the edge” is living up to the hype, and if GPUs will continue to dominate or give way to new architectures. With insights on hardware-software co-design, open vs proprietary ecosystems, and the realities of scaling AI infrastructure, this episode blends deep technical perspective with candid industry observations.

    If you care about AI performance, power efficiency, and what’s next in compute architecture, this is a discussion you won’t want to miss.
    48 mins
  • Edge AI Revolution: Scaling Intelligence from the Network Edge to the Data Center
    Jul 15 2025
    In this episode of Tech Threads: Weaving the Intelligent Future, Baya Systems' CCO Nandan Nayampally welcomes Fabrizio Del Maffeo, founder and CEO of Axelera AI, one of Europe’s most promising AI semiconductor startups. The conversation opens with a sharp look at the growing shift from cloud to edge AI, exploring the power, cost, and latency constraints, and, more importantly, the regional and use-case considerations that are reshaping how and where intelligence is deployed.
    The discussion covers strategies for deploying AI at the network edge, adapting to rapidly evolving workloads, and leveraging digital in-memory computing to enable low-power, high-throughput inference acceleration. It also delves into the future of chiplet-based design, the role of open and programmable hardware, and broader efforts to democratize compute. With shared perspectives on “scale within” and scalable system architectures, this episode offers a compelling view into the future of distributed AI.
    40 mins
  • Beyond the Bottlenecks: A Vision for Intelligent Systems
    Jun 13 2025
    In this episode of Tech Threads: Weaving the Intelligent Future, host Nandan Nayampally welcomes Rochan Sankar, AI infrastructure pioneer and founder of Enfabrica, for a deep dive into the next frontier of intelligent computing. Together, they explore one of the most critical and often overlooked challenges in AI: data movement and its impact at every level, from cloud to end device. The discussion covers new system architectures, keys to scalability, optical interconnects, chiplet innovation and its impacts, and a whole lot more. From startup lessons to bold predictions, this conversation delivers candid insights and forward-looking perspectives on what it will take to build truly scalable AI systems.

    Whether you're an engineer, architect, or simply curious about the technologies shaping tomorrow’s computing landscape, this episode delivers both substance and inspiration. Listen in to discover what’s redefining performance at the infrastructure layer—and what’s coming next.
    44 mins
  • Scaling AI: Simplicity Meets Compute
    May 14 2025
    In this riveting episode of Baya Systems' Tech Threads podcast, tech luminaries Raja Koduri, founder and CEO of Mihiri AI, and Dr. Sailesh Kumar, founder and CEO of Baya Systems, unpack the explosive growth of the intelligent compute era, in which AI demands unprecedented scalability. From Koduri’s trailblazing accelerated computing work at Apple, Intel, and AMD to Kumar’s innovations in software-defined networking, they reveal how today’s supply-constrained systems are evolving through chiplet technology and simplified architectures to meet the annually tripling demands of AI models.

    Koduri stresses simplicity in design, advocating for software abstractions and hardware that hide complexity to enable seamless scaling. Kumar details the need for configurable, software-defined fabrics to support heterogeneous compute workloads.

    From the elegance of “simple” scaling solutions to the critical role of software-hardware co-design, this episode is a masterclass in understanding the tech that powers our world and what’s coming next. It’s a must-listen for tech enthusiasts and professionals alike, offering a thrilling glimpse into the trillion-agent AI future and the scalable, boundary-pushing innovations shaping tomorrow’s world.

    Learn more about Baya Systems' software-defined fabrics solutions for scalable AI at bayasystems.com
    47 mins