The $100 Billion AI Mistake: Why Small Language Models Are Taking Over
About this listen
Brooks dives into the AI industry's biggest gamble: hundreds of billions of dollars being poured into data centers that may be obsolete before they're even built. Featuring insights from Harvard Business Review's latest research on Small Language Models (SLMs), this episode breaks down why specialized AI might crush generalist giants like ChatGPT and Claude.
Key topics:
- Why Salesforce's AgentForce is failing (less than 5% adoption)
- The doctor analogy: LLMs vs. SLMs explained
- Open-source models from China and the UAE that cost about 3% of what the major players charge
- How businesses can run AI locally without sending data overseas
- The coming shift from cloud-based to edge computing
Rob Marks joins Brooks to discuss what this means for small businesses, and whether American AI dominance is as secure as we think. No BS, just practical insights on where AI is really headed.