The AI Enterprise Stack with Jorge Torres
About this listen
In this episode of New Research, we talk about what enterprise AI actually looks like once it hits production. Benjamin White is joined by Pawel Czech (Surge) for a deep, practical conversation with Jorge Torres, CEO of MindsDB. Jorge has been building AI systems inside real companies since long before “chat with your data” became the default pitch, and he brings a grounded view shaped by messy data, legacy systems, risk, and accountability.
We unpack why “having lots of data” can be as useless as having none, why moving intelligence to the data layer matters more than moving data into new tools, and how enterprises should think about the progression from discovery to analytics to automation without blowing past their own risk limits. Along the way, we cover why SQL still matters, where most companies get stuck, how to design AI systems people can actually trust, and why open source has been a strategic advantage rather than a philosophical choice.
What we cover:
- Why enterprise AI keeps failing after the demo
- “Don’t move the data, move the intelligence” and why that abstraction scales
- Predictions vs generation and why chat isn’t the endgame
- Where search ends and analytics really begins
- When automation is a mistake, not a milestone
- Open source, auditability, and human accountability in AI systems
- What will feel obvious about enterprise AI in five years that most teams are still missing today
If you care about AI that ships, survives contact with real organisations, and delivers value without creating silent failure modes, this one’s for you.
Subscribe to New Research for upcoming episodes, and check out MindsDB’s open source repo and community to go deeper.