Trust Issues

Written by: Ailish McLaughlin

About this listen

Welcome to Trust Issues, the podcast for people who use AI but don't fully trust it. Each episode, we speak to experts in building and using AI to help you understand how it really works, who's behind it, where it's going and when you can (and can't) trust it. So you can stop second-guessing and start using AI with confidence.

Copyright 2026 Ailish McLaughlin

Categories: Self-Help, Social Sciences, Success
Episodes
  • AI - magic or maths? A no-jargon guide on how AI actually works.
    Mar 11 2026

    Last week, Florence helped us get our heads around the right mindset for using AI. But there were a lot of words flying around. Agents. LLMs. Machine learning. What do those things actually mean? And more importantly, does it matter?

    This week we're joined by Raji Ramakrishnan, a product leader at Lloyds Banking Group who works on agentic AI observability. Which, yes, is a mouthful. But by the end of this episode, you'll actually know what all of those words mean. And that's kind of the point.

    Raji breaks down the entire AI landscape in a way that finally makes sense. She starts with the basics (AI is not magic, it's maths, data and programming) and walks us through how machines learn, using an analogy that anyone who's taught a child flashcards will immediately get. Supervised learning? That's you holding up the flashcard. Unsupervised learning? That's the kid pointing at a cat in the street, having figured it out on their own.
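
If you like seeing ideas in code, the flashcard analogy can be sketched in a few lines of plain Python. This is our own illustrative toy, not anything from the episode: the animals, measurements and distance threshold are all made up. Supervised learning matches a new example against labelled "flashcards"; unsupervised learning just groups unlabelled examples by similarity, with no names attached.

```python
# Toy sketch of the flashcard analogy (all numbers are invented):
# supervised learning sees labelled examples; unsupervised learning
# groups unlabelled ones on its own.
import math

# "Flashcards": (weight_kg, ear_length_cm) -> label, shown by the adult.
flashcards = [
    ((4.0, 6.5), "cat"),
    ((4.5, 7.0), "cat"),
    ((30.0, 12.0), "dog"),
    ((25.0, 11.0), "dog"),
]

def supervised_predict(animal):
    """Label a new animal by its nearest labelled flashcard (1-NN)."""
    return min(flashcards, key=lambda fc: math.dist(fc[0], animal))[1]

def unsupervised_cluster(animals, threshold=10.0):
    """Group unlabelled animals: same cluster if within `threshold`.
    No names are learned -- just 'these look alike'."""
    clusters = []
    for a in animals:
        for cluster in clusters:
            if math.dist(cluster[0], a) < threshold:
                cluster.append(a)
                break
        else:
            clusters.append([a])
    return clusters

print(supervised_predict((5.0, 6.8)))  # -> cat (nearest flashcard wins)
print(len(unsupervised_cluster([(4.2, 6.7), (28.0, 11.5), (4.8, 7.1)])))  # -> 2 groups
```

The point of the sketch is the asymmetry: the supervised function can only ever answer with labels a human supplied, while the unsupervised one discovers structure without ever knowing what a "cat" is.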

    But this episode isn't just a glossary. It's about why understanding this stuff actually matters. Raji makes a compelling case that AI is coming whether you engage with it or not. Your mobile provider, your bank, your electricity company are all already using it. And the more you understand, the better equipped you are to know when to trust it and when to push back.

    We also get into hallucinations (why AI confidently makes stuff up), the difference between generative AI and agentic AI, and what banks are actually doing behind the scenes to make sure AI doesn't go rogue. Spoiler: there are real humans watching.
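
The "prediction machine, not truth machine" point can be shown with a deliberately tiny language model. The bigram model below is our own toy illustration (the training sentences are invented), not how the episode's guests build anything: it only tracks which word follows which, so it can fluently recombine its training text into a statement that was never in the data and isn't true. That, in miniature, is a hallucination.

```python
# A toy bigram "language model": it predicts the next word from counts,
# with no notion of truth. Fluent recombination of training text can
# yield statements that were never in the data -- a mini hallucination.
from collections import defaultdict

training = [
    "paris is the capital of france",
    "madrid is the capital of spain",
]

# Record which words can follow which.
nexts = defaultdict(set)
for sentence in training:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        nexts[a].add(b)

def completions(start, length=6):
    """Every length-word sequence the bigram chains allow from `start`."""
    seqs = [[start]]
    for _ in range(length - 1):
        seqs = [s + [w] for s in seqs for w in nexts[s[-1]]]
    return {" ".join(s) for s in seqs}

possible = completions("paris")
print("paris is the capital of spain" in possible)  # True: fluent, but false
print("paris is the capital of spain" in training)  # False: never seen in training
```

Every individual word transition here is "correct" by the model's own statistics; the falsehood only appears at the level of the whole sentence, which the model never checks. That's why the episode's advice to always sense-check holds even when the output sounds confident.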

    In this episode, we cover:

    1. AI, machine learning, deep learning, generative AI, agentic AI: what each one actually means and how they connect
    2. The flashcard analogy: how machines learn in a similar way to children (supervised vs unsupervised learning)
    3. Why AI is a prediction machine, not a truth machine, and why that distinction matters
    4. Hallucinations: what they are, why they happen, and why you should always sense-check
    5. Agentic AI: what changes when AI can take actions on its own, not just generate content
    6. Observability and guardrails: what's actually happening inside banks to keep AI in check
    7. Why jargon is an unnecessary barrier to entry and how to not let it hold you back
    8. The mobile phone analogy: remember buying minutes for your Nokia 3310? AI adoption is on the same trajectory

    1 hr and 9 mins
  • Drunk Interns, Lazy Brains and Knowing When to Use AI
    Feb 25 2026

    This week we're kicking things off with a big question: is AI making us lazy? There's a study from MIT that suggests our brains might be outsourcing more than we realise. And with our brains not fully developing until around age 32, what does it mean that we're handing over so much cognitive work to AI tools before we've even finished cooking?

    To help us figure it out, we're joined by Florence Jumpp, a product leader who's been working in AI and machine learning since 2019. Florence has a background in experimental psychology, and she's built her whole AI career around solving problems rather than obsessing over the tech itself.

    Florence introduces us to her "drunk intern" framework. It's exactly what it sounds like. Think of AI as a capable but overconfident intern who's had a few too many. They'll absolutely get stuff done for you, but you wouldn't send them to the board meeting. And you definitely wouldn't have them work on your hardest problems.

    She also shares her VEER framework for deciding which tasks to hand off to AI: weighing a task's Value, Enjoyment, Effort and Risk.
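
A VEER-style triage can be sketched as a simple scoring heuristic. To be clear, the scales, thresholds and rule below are our own illustrative assumptions: the episode describes the four factors, not any particular formula.

```python
# Illustrative sketch of a VEER-style triage (the 1-5 scales and the
# thresholds are assumed -- the framework names the factors, not a formula).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    value: int      # 1-5: how much the outcome matters
    enjoyment: int  # 1-5: how much you like doing it yourself
    effort: int     # 1-5: how tedious it is
    risk: int       # 1-5: cost of the AI getting it wrong

def delegate_to_ai(task: Task) -> bool:
    """Hand off high-effort, low-enjoyment, low-stakes chores; keep
    high-value or high-risk work for your own brain."""
    return (task.effort >= 3 and task.enjoyment <= 2
            and task.risk <= 2 and task.value <= 3)

print(delegate_to_ai(Task("summarise inbox", value=2, enjoyment=1, effort=4, risk=1)))      # True
print(delegate_to_ai(Task("board presentation", value=5, enjoyment=4, effort=4, risk=5)))   # False
```

The useful part isn't the exact cut-offs; it's that risk and value act as vetoes, which matches the drunk-intern rule of never sending the intern to the board meeting.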

    In this episode, we cover:

    1. Why thinking of AI as a "drunk intern" helps you use it more wisely (and why Florence's is called Jack)
    2. The VEER framework for figuring out what to delegate to AI and what to protect
    3. Cognitive offloading: why your brain has stopped taking notes in personal conversations too
    4. How Florence uses Zapier to never face a post-holiday email wall again
    5. Why doing the hard thing still matters, and how to force yourself to sit with the blank page
    6. The positive feedback loop: using freed-up time to get even better at AI, not just filling it with more work
    7. Why the people who think for themselves are the ones who'll stand out

    About our guest: Florence Jumpp is a product leader specialising in AI and machine learning, with a background in experimental psychology. She brings a neuroscience lens to how we should think about AI's impact on our brains and our work.

    Resources mentioned:

    1. Zapier (zapier.com) for building AI-powered automations
    2. MIT study on AI and cognitive offloading

    1 hr and 24 mins