Data Science With Sam

Written by: Soumava Dey

About this listen

This is an educational podcast that brings academia and industry experts together in a common forum to initiate discussions on data science, artificial intelligence, actuarial science, and scientific research.

DISCLAIMER: The views and opinions expressed in this podcast are solely those of the host(s) or guest(s) and do not necessarily reflect the policy or position of any organization. The podcast is intended for general educational and entertainment purposes only.

DataScienceWithSam 2021
Science
Episodes
  • EP 38: The Local AI Stack Nobody Talks About (But Should)
    Apr 22 2026

    You want to run AI locally. You have questions: What hardware do I actually need? Which framework should I use? How much will this cost? What's the realistic performance?

    In this episode, Sam brings back Trent Rossiter, founder of Logic Data Solutions, for a practical walkthrough of building a production-grade local AI lab. Trent has built real systems for enterprise clients, tested frameworks on multiple hardware stacks, and made the hardware choices that matter. This is not theory; this is what actually works.

    WHAT WE COVER:

    ▪ Hardware & Framework Choices: VRAM is the critical metric (not all VRAM is equal — memory throughput matters as much as capacity).

    ▪ Model Architecture & Capability: Mixture of Experts (MoE) lets you fit more power into less VRAM by using fewer active parameters.

    ▪ Real Enterprise Applications: Computer vision for quality assurance on assembly lines. Proprietary data handling without cloud exposure.

    ▪ Your Starter Stack (All Free): Langflow (agentic workflow builder), Goose (MCP-enabled chat), AnythingLLM (with vector stores for RAG), MCP servers (Model Context Protocol — standardised tool integration).

    ▪ Agentic AI & Security: OpenClaw is powerful but controversial — manages email, Telegram, calendars, creates sub-agents. Trent runs it in Docker on an isolated machine for safety. NVIDIA's NemoClaw is the enterprise version (security-first, nothing-allowed-by-default, explicit permissions).
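    The points above about memory throughput and MoE active parameters can be made concrete with a back-of-envelope estimate: local LLM decoding is largely memory-bandwidth bound, so each generated token requires streaming the model's *active* weights from VRAM once. The numbers below are illustrative assumptions for this sketch, not figures from the episode, and real throughput will be lower due to overheads.

    ```python
    # Rough upper bound on decode speed for a local LLM.
    # Assumption: decoding is memory-bandwidth bound, so tokens/sec is
    # capped by (memory bandwidth) / (bytes of active weights per token).

    def est_tokens_per_sec(active_params_b: float, bytes_per_weight: float,
                           bandwidth_gb_s: float) -> float:
        """Estimate a decode-speed ceiling in tokens per second."""
        bytes_per_token = active_params_b * 1e9 * bytes_per_weight
        return bandwidth_gb_s * 1e9 / bytes_per_token

    # Dense 70B model at 4-bit quantization (~0.5 bytes/weight)
    # on ~800 GB/s of memory bandwidth:
    dense = est_tokens_per_sec(70, 0.5, 800)   # ~23 tok/s ceiling

    # MoE model with 70B total but only ~13B active parameters per token
    # on the same hardware: the smaller active set lifts the ceiling.
    moe = est_tokens_per_sec(13, 0.5, 800)     # ~123 tok/s ceiling
    ```

    This is why the episode stresses that throughput matters as much as capacity: total VRAM determines what fits, but bandwidth (and, for MoE, the active parameter count) determines how fast it runs.
    
    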

    HARDWARE TRENT MENTIONS:

    NVIDIA DGX Spark — 128GB unified memory, CUDA stack

    Apple MacBook Pro/Mac mini — up to 512GB unified memory, market leader for personal AI

    AMD integrated AI PCs — emerging competitor

    NVIDIA RTX gaming cards (30/40/50/60 series) — high VRAM, high power consumption, complex

    FIND TRENT ROSSITER:

    LinkedIn: https://www.linkedin.com/in/benjamin-trent-rossiter-mba-0157945/

    Logic Data Solutions: https://logicdatasolutions.com/

    Contact: BenjaminRossiter@LogicDataSolutions.com

    41 mins
  • EP 37: Neurons: Future of AI Processing
    Apr 19 2026

    What if the next generation of computers wasn't made of silicon — but of living human neurons? Not simulated neurons, not artificial neural networks inspired by biology, but actual brain cells grown in a lab, connected to electrodes, and used to process information. That's not science fiction anymore. It's happening right now at FinalSpark, a Swiss startup building the world's first remotely accessible biocomputing platform.

    In this episode, Sam talks with Dr. Ewelina Kurtys, a neuroscientist with a PhD in brain imaging and a postdoctoral researcher at King's College London, about how living neurons could revolutionise computing — and why they use one million times less energy than silicon-based AI hardware.

    ▸ WHAT YOU'LL LEARN

    ▪ How FinalSpark was founded in 2014 by Fred Jordan and Martin Kutter — and why they pivoted from digital AI to biological computing when they realised the energy and cost problem was unsolvable with silicon

    ▪ Why 20 watts powers the human brain while silicon-based AI requires megawatts — and what that means for AI's sustainability crisis

    ▪ Why neurons serve as processors, not power sources: a crucial distinction most people get wrong

    ▪ Why biological neural networks learn continuously while digital systems require full model updates — and what that means for energy efficiency

    ▪ The honest challenge: nobody yet knows exactly how neurons encode information — the biggest scientific hurdle in biocomputing right now

    ▪ How the I/O interface works: electrodes measuring neural spikes, analog-to-digital converters, researchers writing Python code to control neurons remotely

    ▪ The remote access breakthrough: researchers in Tokyo or Bristol can log in and control living neurons in Switzerland in real time via browser

    ▪ Why neurons won't outperform GPUs on speed: biocomputing specialises in efficiency and adaptability, not clock cycles

    ▪ FinalSpark's current stage: they've stored 1 bit of information and are collaborating with 9 universities on fundamental research

    ▪ The cost argument: even priced 10× below comparable NVIDIA hardware, biocomputers would still generate billions in profit thanks to energy and infrastructure savings

    ▪ Bioethics, consent, and regulation: how FinalSpark is working with philosophers now to establish ethical frameworks before biocomputing scales

    ▪ Why human-machine integration is not new: prosthetics, pacemakers, and smartphones are already blending biology and technology

    ▪ The hybrid computing future: silicon, quantum, and biocomputing will coexist, each doing what they do best

    ▪ The real game-changer: cheap, accessible AI for everyone — Ewelina's vision for what biocomputing means for society in 10–20 years.

    ▸ LINKS MENTIONED IN THIS EPISODE

    → Dr. Ewelina Kurtys on LinkedIn

    → Ewelina's Personal Blog & Articles

    → FinalSpark (official website)

    → FinalSpark Neuroplatform (with live neuron view)

    → FinalSpark Team

    → Psync (Ewelina's mental wellness startup)

    → FinalSpark Contact Form

    30 mins
  • EP 36: NVIDIA GTC 2026: Everything That Matters - Recapped
    Mar 28 2026

    Jensen Huang took the stage at SAP Center in San Jose on March 16th and announced that NVIDIA now expects one trillion dollars in chip orders through 2027, double the forecast from just one year ago. Sam breaks down the five biggest stories from GTC 2026 in under 15 minutes.

    In this episode: the Vera Rubin platform (7 new chips, 5 rack types, built for inference and agentic AI), the Groq 3 LPU (NVIDIA's $20B inference play), NemoClaw (the enterprise-ready agentic AI stack built on the viral open-source project OpenClaw), the autonomous vehicle announcement with Uber and seven major automakers, and the Nemotron Coalition for open frontier models.

    Whether you're building in ML, working in data, or just trying to stay ahead of where AI infrastructure is heading, this is your sub-15-minute briefing.

    Links:

    NVIDIA GTC 2026 Press Kit: nvidianews.nvidia.com/online-press-kit/gtc-2026-news

    Jensen Huang Keynote On Demand: nvidia.com/gtc/keynote

    Vera Rubin Press Release: nvidianews.nvidia.com/news/nvidia-vera-rubin-platform

    GTC 2026 Sessions On Demand: nvidia.com/gtc/

    13 mins