Learn more about AI Infrastructure Field Day 4 here.
As we build out AI infrastructure and applications, we need resource efficiency; continuously buying more horsepower cannot go on forever. This episode of the Tech Field Day podcast features Pete Welcher, Gina Rosenthal, Andy Banta, and Alastair Cooke hoping for a more efficient AI future. Large language models are trained on massive farms of GPUs and massive amounts of Internet data, so we expect to use large farms of GPUs and unstructured data to run those LLMs. Those large farms have led to GPU scarcity, and now to RAM price increases, both of which are impeding businesses building their own large AI infrastructure. Task-specific AIs, built on smaller, more efficient models, should be the future of agentic AI and of AI embedded in applications. More efficient and targeted AI may be the only way to get business value from the investment, especially in resource-constrained edge environments. Does every AI problem need a twenty-billion-parameter model? More mature use of LLMs and AI will focus on reducing the cost of delivering inference to applications, your staff, and your customers.
Panelists:
Gina Rosenthal, Product Marketing Manager
Pete Welcher, Networking Expert
Andy Banta, Storage and Infrastructure Consultant
Hosts:
Tom Hollingsworth, Event Lead for Tech Field Day
Alastair Cooke, Event Lead at Tech Field Day
Stephen Foskett, President and Organizer of Tech Field Day
Follow the Tech Field Day Podcast on X/Twitter or on Bluesky and use the hashtag #TFDPodcast to join the discussion. Listen to more episodes on the podcast page of the website.
Follow Tech Field Day for more information on upcoming and current event coverage on X/Twitter, on Bluesky, and on LinkedIn, or visit our website.