Beyond the CPU vs GPU War: Rethinking AI Compute at the System Level
As architectures fragment across accelerators, chiplets, and custom silicon, the real challenge is no longer building faster chips; it's turning all of these elements into a cohesive, high-performance system.
This conversation explores why the industry is moving beyond the traditional “CPU vs GPU” narrative and toward a system-level approach where performance is defined by how effectively compute, memory, interconnect and software work together.
From the growing momentum behind RISC-V to the rise of heterogeneous compute environments, the discussion highlights a clear trend: the future won’t be defined by a single dominant architecture, but by optimized combinations of technologies tailored to specific workloads.
That shift introduces a new layer of complexity.
Key themes explored in this episode include:
- Why data movement is emerging as the primary constraint in AI systems
- How efficiency metrics like “tokens per dollar” are reshaping design priorities
- The shift toward purpose-built architectures across data center, automotive, and edge applications
- The role of open ecosystems and interoperability in accelerating innovation
- Why competitive advantage is shifting from individual components to full system design
If you’re interested in where AI is headed, this is a must-listen conversation on the forces shaping the future of compute and what it takes to stay ahead.