Episodes

  • #342 Andrew Thangaraj: The $5,000 IIT Degree: Can India Fix Its Broken Education System?
    May 1 2026

    What if the most competitive exam in the world is also the most destructive?

    In this episode of Eye on AI, Craig Smith sits down with Professor Andrew Thangaraj, faculty at the Department of Electrical Engineering at IIT Madras, to explore how one of India's most prestigious institutions is quietly dismantling the system it helped build.

    Andrew lays out the honest reality of higher education in India. Two and a half crore (25 million) kids reach college age every year. Only 90 lakh (9 million) make it to college. And the IITs, the most coveted institutions in the country, take just 17,000. The competition for those seats has become so extreme that students are losing their childhoods and stunting their development, and even those who make it through are often unemployable, because the system rewards knowledge over skills.

    Andrew walks through exactly how IIT Madras is responding. A full, IIT-branded undergraduate degree in data science delivered entirely online for under five lakhs, roughly $5,000. No JEE required. No elite school background needed. Just a 10th standard foundation and the willingness to do the work. The program flips the traditional model, putting hands-on skills and real projects before theory, building in multiple exit points for students who need to start earning before they finish, and scaling to over 40,000 active students through a hybrid of faculty-recorded lectures, full-time instructors, and a remarkably active student community.

    We also get into the bigger picture. Why India's AI talent gap is as much a culture problem as a numbers problem. Whether India can leapfrog into AI leadership the way China did after rebuilding its research ecosystem. Where AI tools are already being tested inside the program and where they still fall short. And how AI deployed in Indian languages, in agriculture, and in the courts could drive the kind of societal change that no corporate productivity tool ever will.

    Subscribe for more conversations with the people shaping the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Introduction and Andrew Thangaraj's Background

    (01:29) India's Higher Education Bottleneck

    (03:45) Designing a $5,000 IIT Degree

    (09:27) Why Graduates Still Lack Skills

    (12:31) When the Program Started and How It Got Approved

    (13:56) Program Structure, Diplomas and Multiple Exit Points

    (17:52) Who the Program Reaches and Surprising Student Stories

    (24:57) Older Students, Working Professionals and International Enrollment

    (29:55) Can India Leapfrog in AI

    (34:03) Data Centers, Power and Infrastructure Gaps

    (40:57) How Involved Are the IITs in India's AI Mission

    (46:00) AI for Languages, Farms and Courts

    49 mins
  • #341 Celia Merzbacher: Beyond the Buzzword: The Real State of Quantum Computing, Sensing, and AI in 2025
    Apr 30 2026

    What does the quantum industry actually look like right now, beneath all the hype?

    In this episode of Eye on AI, Craig Smith sits down with Celia Merzbacher, Executive Director of the Quantum Economic Development Consortium (QED-C), to break down the real state of quantum technology in 2025. From market growth and enterprise readiness to the growing intersection with AI, Celia brings a grounded insider perspective on where the industry stands and what comes next.

    Celia explains why the quantum market is growing faster than even the companies inside it predicted, with revenues rising roughly 27% year over year and actual numbers consistently beating forecasts. She also makes clear that the future is not quantum replacing classical computers. It is hybrid systems combining both to solve problems that simply cannot be solved today, with early use cases already emerging in pharmaceuticals, energy, finance, and defense.

    We also get into quantum sensing, the most underrated corner of the quantum world. From biomedical imaging already in clinical trials to quantum clocks powering GPS and financial transaction timestamping, sensing is already partially commercialized and quietly reshaping industries most people have never connected to quantum at all.

    Finally, Celia addresses the AI question directly. Will AI replace quantum? No. The two are complementary. AI is already accelerating quantum hardware design and algorithm discovery, and quantum may eventually improve how AI systems are trained. She closes with a clear message for enterprise leaders: the transition to quantum will not be a migration. It will be a paradigm shift, and the time to start preparing is now.

    Subscribe for more conversations with the people building the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    Timestamp:

    (00:00) Introduction: What Is QED-C and Why Does It Exist?

    (01:57) Celia Merzbacher on Her Background and Role

    (04:32) Annual Market Survey: How Fast Is Quantum Actually Growing?

    (09:10) Where Quantum Revenue Is Coming From Today

    (11:11) Timeline and the Race to Utility-Scale Quantum Computing

    (13:23) Early Use Cases: Pharma, Energy, Finance and Hybrid Computing

    (16:14) What Is Quantum Sensing and Why It Matters

    (20:39) The Three Pillars: Hardware, Error Correction and Algorithms

    (27:40) How Enterprises Should Start Preparing for Quantum Now

    (38:39) AI and Quantum: Allies Not Competitors

    45 mins
  • #340 Steffen Cruz: Training AI Without Data Centres
    Apr 29 2026

    What if you could train a frontier AI model without building a single data centre?

    In this episode of Eye on AI, Craig Smith sits down with Steffen Cruz, co-founder and CTO of Macrocosmos, to explore a radical alternative to the way AI models are built today. Instead of billion-dollar GPU warehouses, Steffen is training large language models using idle compute from devices distributed around the world, coordinated through the Bittensor blockchain.

    Steffen breaks down why the centralised data centre model is heading toward a wall. Projects like Stargate and Colossus cost tens of billions of dollars, and as appetite for larger models grows, the economics simply stop making sense. He explains how distributed training flips this on its head, tapping into surplus energy, underutilised GPUs, and even consumer devices like Mac Minis to train models at a fraction of the cost.

    We also get into IOTA, Macrocosmos's flagship technology, an orchestration layer that takes compute nodes scattered across the globe and makes them act like a single supercomputer. No single device runs the full model. Instead, each one carries a small slice, a technique called model parallelism, and together they can train frontier-scale models that would otherwise be out of reach for startups, researchers, and enterprises.
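    The core idea described above, that no single device holds the full model and each carries only a slice of its layers, can be sketched in a few lines. This toy is illustrative only: the class and function names are hypothetical and assume nothing about IOTA's actual interfaces.

```python
# Toy sketch of model parallelism: no single worker holds the full model.
# Each "device" owns one contiguous slice of layers; activations flow
# from one shard to the next, so together they act like one model.

class DeviceShard:
    """One participant holding a slice of the model's layers."""
    def __init__(self, layers):
        self.layers = layers  # the weights for these layers live here only

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

def run_pipeline(shards, x):
    """Chain shards so the scattered slices behave like a single model."""
    for shard in shards:
        x = shard.forward(x)  # in a real system this hop crosses the network
    return x

# Example: a 4-layer "model" split across 2 devices, 2 layers each.
double = lambda v: v * 2
inc = lambda v: v + 1
shards = [DeviceShard([double, inc]), DeviceShard([double, inc])]
print(run_pipeline(shards, 3))  # ((3*2)+1)*2+1 = 15
```

    The hard part in practice, and what an orchestration layer has to solve, is making that per-shard network hop fast and fault-tolerant across unreliable consumer devices.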

    Finally, Steffen shares what he's building toward: 70 billion parameter models trained at 10 to 20 percent of centralised costs, a two-sided marketplace for compute, and a future where anyone with a spare GPU or Mac Mini can earn passive income while contributing to the democratisation of AI.

    Subscribe for more conversations with the people building the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    Timestamp:

    (00:00) Introduction: The Problem With Blockchain AI Projects

    (06:39) Meet Steffen Cruz: From Subatomic Physics to Decentralised AI

    (09:16) What Is Bittensor? The Blockchain Built for AI

    (11:53) How the Blockchain Actually Works: Registry, Clock, and Rewards

    (15:08) Why Data Centres Are Hitting a Wall

    (22:01) Distributed Training vs Federated Learning: What's the Difference?

    (27:47) Train at Home: Turning Your Mac Mini Into a Passive Income Machine

    (32:49) IOTA Explained: Building a Global Supercomputer From Spare Parts

    (39:43) How the Network Scales: From 256 Nodes to Limitless Compute

    (44:39) The Road Ahead: 70B Parameter Models and the Future of Affordable AI

    46 mins
  • #339 Eamonn Maguire: Your Child Has a Data Profile Before They're Born
    Apr 28 2026

    What if your child already has a data profile, and they haven't even been born yet?

    In this episode of Eye on AI, Craig Smith sits down with Eamonn Maguire, Director of Engineering for AI and ML at Proton, to explore one of the most urgent and underappreciated questions in the age of AI: who owns your data, who is building a profile on you, and what can actually be done about it?

    Eamonn brings a rare combination of depth and range to this conversation. With a PhD from Oxford, a postdoc at CERN, and years at Facebook engineering ML systems to detect internal and external threats, he now leads Proton's AI efforts, including Lumo, their end-to-end encrypted alternative to ChatGPT. He makes a compelling case that the surveillance economy is not just a privacy problem but a behavioral one, where the systems profiling you are not only observing who you are but actively shaping who you become.

    We get into how just three data points are enough for advertisers to infer your age, political leanings, religion, and spending habits. We discuss why trusting mainstream AI platforms with sensitive data is a structural problem, not just a policy one, and why the AI labs with the best models got there by acquiring the most data, often with little regard for copyright law. Eamonn also breaks down the difference between truly open models and open washing, and explains how Proton builds AI that is genuinely private by design, with local indexing, encrypted memory, and user-controlled data sharing.

    Then there is Born Private, Proton's initiative to give children a private digital identity from birth. It sounds simple on the surface, but the conversation it opens up is anything but. Data collection on your child begins before they are born, the moment a parent emails a gynecologist or a fertility clinic. Eamonn argues that until we start thinking about privacy the way we think about other rights, from the very beginning, the surveillance machine will always have a head start.

    Subscribe for more conversations with the people building the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on AI on X: https://x.com/EyeOn_AI

    Timestamp:

    (00:00) Introduction and Meet Eamonn Maguire

    (00:38) From Bioinformatics to CERN to Facebook: Eamonn's Career Arc

    (05:23) How Proton Started in the CERN Cafeteria

    (09:23) What Mainstream AI Platforms Actually Do With Your Data

    (13:00) Copyright, Training Data, and Why Big Labs Can't Be Trusted

    (15:10) Open Models vs Open Washing: What Truly Open AI Looks Like

    (24:22) How Lumo Works: Encrypted Memory and No Data Leakage

    (31:18) Born Private: Reserving a Private Email Address at Birth

    (33:00) How Data Profiling Starts Before Your Child Is Born

    (34:26) How Three Data Points Become a Complete Profile

    (39:07) Molly Russell and the Consequences of Algorithmic Profiling

    (53:55) The Full Proton Ecosystem: Mail, VPN, Drive, Lumo, and Workspace

    46 mins
  • #338 Amith Singhee: Can India Catch Up in AI? IBM's Amith Singhee on What It Will Take
    Apr 24 2026

    What if the country that trains the world's engineers finally built the infrastructure to match its talent?

    In this episode of Eye on AI, Craig Smith sits down with Amith Singhee, Director of IBM Research India and CTO of IBM India and South Asia, to explore where India actually stands in the global AI race and what it will take to close the gap.

    Amith gives an honest, ground-level assessment of why India has been slow to compete. The talent has always been there. But until recently, the investment, the compute infrastructure, and the institutional intent hadn't come together in a sustained, coordinated way. That's changing, and Amith explains exactly what's different now.

    He walks through IBM Research India's 27-year presence in the country, the research it's doing on foundation models, hybrid cloud AI deployment, agentic systems, and quantum computing. He also explains why building AI from India doesn't just help India. Working with less data, less compute, and more linguistic diversity forces better engineering and makes IBM's models more generalizable for the entire world.

    We also get deep into the technical frontier. Why catastrophic forgetting is one of the key unsolved problems standing between current AI and anything more capable. How IBM is already shipping continual learning in practice through its COBOL modernization tools, helping enterprises decode decades of legacy code before the engineers who wrote it are gone. And why agentic AI, for all the hype, still has a mountain of unglamorous enterprise engineering left to climb before it becomes truly reliable.

    Plus, what Amith would tell an 18-year-old engineer in India today about what skills will actually matter in an AI-driven world.

    Subscribe for more conversations with the people shaping the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Introduction and Amith Singhee's Background

    (06:26) Why IBM Set Up Research in India

    (11:45) Can India Compete in AI

    (15:18) How IBM Collaborates With Indian Universities

    (19:25) Why India Has Been Slow in AI

    (24:50) IBM's Hybrid Cloud AI Research Focus

    (27:34) How Data Scarcity in India Makes Better AI

    (31:18) Fine-Tuning Models Without Losing General Knowledge

    (35:03) Continual Learning and Catastrophic Forgetting

    (38:25) COBOL and Legacy Code Modernization

    (42:11) Agentic AI Hype vs Enterprise Reality

    (48:09) What Young Engineers Should Study Today

    47 mins
  • #337 Debdas Sen: Why AI Without ROI Will Die (Again)
    Apr 23 2026

    What does it actually take to prove that AI delivers real value in the industries that keep the world running?

    In this episode of Eye on AI, Craig Smith sits down with Debdas Sen, CEO of TCG Digital and Joint Managing Director of Lummus Digital, to explore what serious enterprise AI looks like when it is applied to some of the most complex, high-stakes problems on the planet. Problems like compressing years of catalyst research into weeks, predicting refinery failures before they happen, and accelerating drug development timelines that could determine how long a life-saving medicine takes to reach patients.

    Debdas has spent nearly 30 years in data and AI, living through every hype cycle from the data warehousing era of 1997 to today's agentic revolution. He makes a compelling case that the AI community has one defining job right now: prove the ROI, or risk another AI winter.

    We also get into what makes TCG Digital's platform mcube™ different. It is not a horizontal tool. It is a domain-first, agentic AI ecosystem built for the kinds of massive, multi-variable problems that horizontal platforms cannot touch. Debdas breaks down how mcube™ bridges legacy enterprise infrastructure with cutting-edge agentic systems, why hybrid modeling beats pure AI in energy and life sciences, and how the platform keeps private enterprise data protected while still drawing on the best of what public LLMs have to offer.

    Finally, Debdas shares where he sees the industry heading next, a future where agents from different providers can reason together in a neutral space, where inference and reasoning keep improving, and where the companies that go deepest into domain will pull furthest ahead.

    Subscribe for more conversations with the people building the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on AI on X: https://x.com/EyeOn_AI

    TCG Digital Website: https://www.tcgdigital.com/

    TCG Digital on LinkedIn: https://www.linkedin.com/company/tcgdigital/

    (00:00) Introduction and Meet Debdas Sen

    (01:30) 30 Years in Data and AI: From Data Warehousing to Agentic Systems

    (03:02) What TCG Digital Actually Does

    (04:32) Inside mcube™: How the Platform Works

    (10:06) Domain vs Horizontal: Why Specificity Wins in Enterprise AI

    (18:29) Catalyst R&D: Collapsing 12 Months of Research Into One

    (30:38) Predicting Plant Failures Before They Happen

    (36:51) Solving the Trust and Hallucination Problem in Enterprise AI

    (44:51) The Six-Layer Architecture of mcube™

    (47:05) What Is Genuinely New About Agentic AI

    (49:22) What Young People Should Study to Work in Serious AI

    (53:14) Velocity to Value: Why ROI Must Be Tracked From Day One

    51 mins
  • #336 Professor Mausam: Why India Is Losing the AI Race and What It Will Take to Catch Up
    Apr 20 2026

    What if the country that produces the world's top AI talent finally figured out how to keep it?

    In this episode of Eye on AI, Craig Smith sits down with Professor Mausam, one of India's leading AI researchers, AAAI Fellow, and founding head of the Yardi School of Artificial Intelligence at IIT Delhi, to get an honest and unflinching diagnosis of why India has fallen so far behind the US and China in artificial intelligence and what it will actually take to close that gap.

    Mausam breaks down the structural story behind India's deficit. A pipeline of world-class students that gets exported abroad the moment it graduates. A professor shortage so severe that IIT Delhi's entire School of AI has hired only five new faculty members in five years. A government AI mission with the right instincts but not enough speed or boldness. And a brain drain made worse by the very thing India is proud of, its English fluency, which makes its talent the easiest in the world to absorb and the hardest to bring back.

    Mausam walks through the full picture. How China built its research dominance not through students but through aggressively repatriating senior researchers with real salaries, real lab resources, and real authority to build research cultures from scratch. Why the AlexNet moment in 2012 was actually an equalizer that gave China's fledgling ecosystem a surprise advantage over more established Western research groups. How India's JEE coaching culture and IIT bottleneck are symptoms of a scarcity of quality institutions rather than a broken exam. What the government's AI mission is getting right on compute, data, and sectoral focus, and where the critical gaps remain. And why Mausam believes that bringing one hundred top professors back to India would do more for the country's AI future than any single government program or funding initiative.

    We also get into the harder questions. Whether AI degrees belong at the undergraduate level or should sit on top of a computer science foundation. Why Mausam no longer holds an optimistic view on AI's impact on software jobs and why he thinks Geoff Hinton's point about plumbers has merit. And what it would actually take for a democracy of 1.4 billion people to stop training the world's AI leaders and start keeping them.

    Subscribe for more conversations with the researchers, builders, and policymakers shaping the future of artificial intelligence.

    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Introduction: India's AI Gap and Professor Mausam's Background
    (02:30) Building the Yardi School of AI at IIT Delhi
    (07:44) How Far China Has Pulled Ahead in AI Research
    (12:55) Why India Could Not Follow China's Playbook
    (29:18) The JEE System, Coaching Culture, and the IIT Bottleneck
    (30:37) AI Degrees, Job Market Realities, and the Future of Work
    (44:18) The Real Problem Is Professors, Not Students
    (48:07) Big Tech Labs in India: Helpful but Not at Scale
    (51:46) The Government AI Mission: Progress and Gaps
    (55:20) The Compute and Data Infrastructure Problem
    (59:54) Can India Close the Gap Before It Is Too Late

    1 hr
  • #335 Sriram Raghavan: Why IBM Is Betting Everything on Small AI Models
    Apr 19 2026

    In this episode of Eye on AI, Craig Smith sits down with Sriram Raghavan, Vice President of AI at IBM Research, to explore one of the most important debates in enterprise AI right now. Do you actually need a massive model to get world-class results? IBM's answer is no, and Sriram breaks down exactly why.

    Sriram explains why IBM chose to train its Granite models directly using reinforcement learning rather than distilling from larger models like most of the industry. The reason goes beyond performance. It comes down to data lineage, safety alignment, and a belief that small, efficient models are the only sustainable path for enterprises running AI across hybrid cloud environments.

    We get into the full technical stack behind that bet. How data quality has replaced model size as the real competitive advantage. Why parameter count is becoming the wrong metric entirely. How IBM's inference time scaling techniques allow an 8 billion parameter model to match the performance of GPT-4o and Claude 3.5 on code and math benchmarks. And why IBM is pioneering a new concept called Generative Computing, which treats AI models not as prompt receivers but as programmable computing elements with runtimes, modular LoRA adapters, and proper programming abstractions.
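    One common flavor of inference-time scaling is best-of-n sampling: draw several candidate answers from a small model and keep the one a scorer ranks highest, trading extra inference compute for quality instead of extra parameters. The sketch below is a generic illustration of that idea, not IBM's technique; the episode does not specify which method Granite uses, and every function here is a hypothetical stand-in.

```python
# Best-of-n sampling: spend more compute at inference time instead of
# using a bigger model. Sample n candidates, score each, keep the best.

def small_model(prompt, sample_id):
    # Stand-in for sampling a small LLM at nonzero temperature:
    # different sample_ids yield different candidate answers to 2+2.
    return 4 + (-1, 0, 1)[sample_id % 3]

def scorer(prompt, answer):
    # Stand-in verifier/reward model: prefers answers closer to the
    # true value 4 (real systems might use a trained reward model).
    return -abs(answer - 4)

def best_of_n(prompt, n=3):
    candidates = [small_model(prompt, i) for i in range(n)]
    return max(candidates, key=lambda a: scorer(prompt, a))

print(best_of_n("What is 2+2?"))  # -> 4: the correct candidate wins
```

    The design trade-off is straightforward: n forward passes of an 8B model can still be far cheaper than one pass of a frontier-scale model, provided the scorer is good enough to pick the right candidate.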

    Sriram also shares where IBM Research is headed next, including breakthroughs in continuous learning, agent orchestration, and making unstructured enterprise data actually usable at scale.

    Subscribe for more conversations with the people building the future of AI and emerging technology.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Why IBM Skips Distillation and Trains Small Models Directly

    (04:50) Did We Even Need Giant AI Models in the First Place?

    (08:12) How Data Quality Became the New Competitive Moat

    (11:54) Why Parameter Count Is the Wrong Way to Measure a Model

    (15:36) Reinforcement Learning Without Losing Broad Capabilities

    (22:05) Inference Time Scaling: Getting Big Model Results From Small Models

    (28:12) Generative Computing: Treating AI as a Programming Element

    (36:40) Why IBM Open Sources and How Small Models Make It Sustainable

    (41:25) The Path to Continuous Learning Without Rewriting Weights

    (51:00) IBM's Full Roadmap: Models, Data, and Agents

    1 hr