AI lab by information labs

Written by: information labs

About this listen

AI lab podcast, "decrypting" expert analysis to understand Artificial Intelligence from a policy making point of view.CC BY 4.0 license Science
Episodes
  • AI lab TL;DR | Joan Barata - Transparency Obligations for All AI Systems
    Dec 10 2025

    🔍 In this TL;DR episode, Joan explains how Article 50 of the EU AI Act sets out high-level transparency obligations for AI developers and deployers—requiring users to be informed when they interact with AI or access AI-generated content—while noting that excessive labeling can itself be misleading. She highlights why the forthcoming Code of Practice must focus on clear principles rather than fixed technical solutions, ensuring transparency helps prevent deception without creating confusion in a rapidly evolving technological environment.

    📌 TL;DR Highlights

    ⏲️[00:00] Intro

    ⏲️[00:33] Q1-What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

    ⏲️[02:31] Q2-What’s the difference between disclosing a chatbot and technically marking AI-generated media?

    ⏲️[06:27] Q3-What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

    ⏲️[10:00] Q4-If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

    ⏲️[13:11] Q5-What is the one core idea you want policymakers to take away from your research?

    ⏲️[16:45] Wrap-up & Outro

    💭 Q1 - What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

    🗣️ “Article 50 sets only broad transparency rules—so a strong Code of Practice is essential.”

    💭 Q2 - What’s the difference between disclosing a chatbot and technically marking AI-generated media?

    🗣️ “If there’s a risk of confusion, users must be clearly told they’re interacting with AI.”

    💭 Q3 - What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

    🗣️ “Too much transparency can mislead just as much as too little.”

    💭 Q4 - If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

    🗣️ “We should focus on principles, not chase technical solutions that will be outdated in months.”

    💭 Q5 - What is the one core idea you want policymakers to take away from your research?

    🗣️ “Transparency raises legal, technical, psychological, and even philosophical questions—information alone doesn’t guarantee real agency.”

    📌 About Our Guests

    🎙️ Joan Barata | Faculdade de Direito - Católica no Porto

    🌐 linkedin.com/in/joan-barata-a649876

    Joan Barata works on freedom of expression, media regulation, and intermediary liability issues. He is a Visiting Professor at Faculdade de Direito - Católica no Porto and a Senior Legal Fellow at The Future of Free Speech project at Vanderbilt University. He is also a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center.

    #AI #ArtificialIntelligence #GenerativeAI

    17 mins
  • AI lab TL;DR | Aline Larroyed - The Fallacy Of The File
    Nov 27 2025

    🔍 In this TL;DR episode, Caroline and Aline unravel why the popular idea of “AI memorisation” leads policymakers down the wrong path—and how this metaphor obscures what actually happens inside large language models. Moving from the technical realities of parameter optimisation to the policy dangers of doctrinal drift, they explore how misleading language can distort copyright debates, inflate compliance burdens, and threaten Europe’s research and innovation ecosystem.

    📌 TL;DR Highlights

    ⏲️[00:00] Intro

    ⏲️[00:33] Q1-In your view, what is the biggest misunderstanding behind the ‘memorisation’ metaphor in AI, and why is this framing so problematic when applied to copyright law?

    ⏲️[02:00] Q2-What actually happens inside a large language model during training, and why should this process not be treated as copyright ‘reproduction’?

    ⏲️[03:32] Q3-What do you see as the main legal, economic, and innovation risks for Europe if policymakers continue relying on the memorisation metaphor when designing AI regulation?

    ⏲️[04:39] Q4-If ‘memorisation’ is the wrong frame, what alternative concepts or policy focus areas should policymakers adopt to regulate AI more accurately and effectively?

    ⏲️[06:28] Q5-What is the one core idea you want policymakers to take away from your research?

    ⏲️[07:32] Wrap-up & Outro

    💭 Q1 - In your view, what is the biggest misunderstanding behind the ‘memorisation’ metaphor in AI, and why is this framing so problematic when applied to copyright law?

    🗣️ “A large language model is not a filing cabinet full of copyrighted material.”

    💭 Q2 - What actually happens inside a large language model during training, and why should this process not be treated as copyright ‘reproduction’?

    🗣️ "Training is parameter optimisation, not the storage of protected expression.”

    💭 Q3 - What do you see as the main legal, economic, and innovation risks for Europe if policymakers continue relying on the memorisation metaphor when designing AI regulation?

    🗣️ "Stretching the reproduction right to cover statistical learning would be disastrous for research and innovation in Europe.”

    💭 Q4 - If ‘memorisation’ is the wrong frame, what alternative concepts or policy focus areas should policymakers adopt to regulate AI more accurately and effectively?

    🗣️ "We need mechanism-aware regulation, not metaphor-driven lawmaking.”

    💭 Q5 - What is the one core idea you want policymakers to take away from your research?

    🗣️ "Don’t write rules for filing cabinets when we are dealing with statistical models.”

    📌 About Our Guests

    🎙️ Aline Larroyed | Dublin City University

    🌐 linkedin.com/in/aline-l-624a3655

    🌐 Article | The Fallacy Of The File: How The Memorisation Metaphor Misguides Copyright Law And Stifles AI Innovation

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5782882

    Aline Larroyed is a postdoctoral researcher at Dublin City University and holds a PhD in International Law with a background in linguistics. She brings 20 years of experience in human rights, intellectual property, and international regulation, and is a member of the Institute for Globalization and International Regulation at Maastricht University and the COST LITHME network.

    #AI #ArtificialIntelligence #GenerativeAI

    8 mins
  • AI lab TL;DR | Anna Mills and Nate Angell - The Mirage of Machine Intelligence
    May 26 2025

    🔍 In this TL;DR episode, Anna and Nate unpack why calling AI outputs “hallucinations” misses the mark—and introduce “AI Mirage” as a sharper, more accurate metaphor. From scoring alternative terms to sparking social media debates, they show how language shapes our assumptions, trust, and agency in the age of generative AI. The takeaway: choosing the right words is a hopeful act of shaping our AI future.

    📌 TL;DR Highlights

    ⏲️[00:00] Intro

    ⏲️[00:42] Q1-What’s wrong with the term “AI hallucination” — and how does “mirage” help?

    ⏲️[05:30] Q2-Why did “mirage” stand out among 80+ alternatives?

    ⏲️[10:30] Q3-How should this shift in language impact educators, journalists, or policymakers?

    ⏲️[10:10] Wrap-up & Outro

    💭 Q1 - What’s wrong with the term “AI hallucination” — and how does “mirage” help?

    🗣️ “There's no reason to think that AI is experiencing something, that it has a belief about what's real or what's not.” (Anna)

    🗣️ “It anthropomorphizes AI, and it also misleads us to think that this might be a technically fixable problem—as a person might take medication for mental illness—that maybe AI could be induced not to hallucinate.” (Anna)

    🗣️ “I did come up with my own criteria, which included: not implying that AI has intent or consciousness, implying that outputs don't match reality in some way, showing a connection to the patterns in the training data ideally, but also showing that AI can go beyond training data.” (Anna)

    🗣️ “The words used to describe different technologies can sometimes steer people in directions in relation to them that aren’t really beneficial.” (Nate)

    🗣️ “Just like how a desert produces a mirage under certain circumstances... It’s the same with AI. There’s a system at play... that can produce a certain situation, which can then be perceived by an observer as possibly misleading, inaccurate, or counterfactual.” (Nate)

    💭 Q2 - Why did “mirage” stand out among 80+ alternatives?

    🗣️ “I actually went through and rated each term numerically on each of those criteria and did kind of a simple averaging of that to see which terms scored the highest.” (Anna)

    🗣️ “We decided that it was misleading to say 'Data Mirage,' because people would think the problem was in the data... and that’s not the case. So we ditched the 'data' part and just landed on 'AI Mirage'.” (Anna)

    🗣️ “We kind of realized, as we were discussing 'Mirage,' how important it was that it centered human judgment—and that wasn’t initially one of the criteria.” (Anna)

    🗣️ “Even when we know how it works and we know it’s wrong, sometimes there’s still that temptation... to say, 'Wow, I think it really nailed it this time.'” (Anna)

    🗣️ “We really wanted to encourage this ongoing interrogation of the metaphors we use and the language we use, and how they're affecting our relationship with AI.” (Anna)

    💭 Q3 - How should this shift in language impact educators, journalists, or policymakers?

    🗣️ “How do we build systems and train ourselves to think about how we want to interact with them, stay in control, and still be the ones making judgments and choices?” (Anna)

    🗣️ “We are participating in shaping that future, and it’s not over. We don’t have to just capitulate and accept the term that’s used. We don’t have to accept someone’s vision of what AGI is going to be in five years. We’re all shaping this.” (Anna)

    🗣️ “In a way, it doesn’t really matter what term you end up with—just asking the question of whether 'hallucination' is a useful or accurate term can spark a really interesting and valuable discussion.” (Nate)

    🗣️ “There are many systemic issues we should be thinking about with AI. But I also believe in the power of the damning—of the words we use to talk about it—as being an important factor in all that.” (Nate)

    🗣️ “It’s useful for us as humans to have different words for those outputs we deem unexpected, incorrect, or counterfactual. It helps us to talk about when an AI mirages rather than dumping all its outputs into one big undifferentiated basket.” (Nate)

    📌 About Our Guests

    🎙️ Anna Mills | College of Marin

    🌐 linkedin.com/in/anna-mills-oer

    🎙️ Nate Angell | Nudgital

    🌐 linkedin.com/in/nateangell

    🌐 Article | Are We Tripping? The Mirage of AI Hallucinations

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5127162

    Anna is a college writing instructor and a leading advocate for AI literacy in education, building on her combined teaching experience and technical knowledge. Nate is the founder of Nudgital, a company that builds sustainability and growth at the intersection of communications, community, technology, and strategy.

    #AI #ArtificialIntelligence #GenerativeAI

    21 mins