UNESCO’s Hands-On AI Supervision

Written by: UNESCO

About this listen

UNESCO’s Hands-On AI Supervision: Lessons from Practice is a six-episode mini podcast series showcasing concrete lessons from the 2nd Expert Roundtable on AI Supervision, convened by UNESCO. Each episode distils insights from hands-on exercises with leading experts on AI risk mapping, evaluations, red teaming, benchmarking, cybersecurity, and engagement with market actors. Designed for regulators, policymakers, and practitioners, the series explores practical methodologies, emerging challenges, and the institutional capacities needed for effective AI oversight. Through focused conversations with specialists, the series provides accessible, actionable knowledge to strengthen technical readiness and foster ongoing dialogue across the global AI supervision community.

Hosted on Ausha. See ausha.co/privacy-policy for more information.
Episodes
  • Evaluating AI Systems: Metrics, Methods, and Measurement Gaps
    Feb 17 2026

    A deep dive into the metrics and methodologies essential for robust AI evaluations. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance.

    The conversation highlights gaps between emerging benchmarks and real-world regulatory needs.

    Speaker: Agnès Delaborde (Laboratoire national de métrologie et d'essais – LNE)
    Interviewer: Lihui Xu, Programme Specialist, Ethics of AI Unit, UNESCO

    33 mins
  • Mapping AI Risks: From Principles to Practice
    Feb 10 2026

    This episode explores how supervisory authorities can translate high-level AI risk principles into practical, operational risk-mapping processes. Nathalie Cohen discusses evaluation frameworks, data considerations, and real-world challenges identified during the roundtable exercise, providing regulators with concrete steps for structuring risk identification and prioritisation.

    Speaker: Nathalie Cohen (OECD)
    Interviewer: Max Kendrick, AI Strategy Coordinator & Senior Advisor, Office of the Director General, UNESCO

    36 mins
  • AI Safety & Benchmarking: Building Trustworthy Evaluation Ecosystems
    Feb 1 2026

    Effective AI supervision requires reliable benchmarking ecosystems. Nicolas Miailhe discusses why benchmarks matter, how they should be constructed, and what regulators need to know about safety evaluations. The conversation highlights emerging international efforts to standardise safety testing and ensure comparability across models.

    Speaker: Nicolas Miailhe (PRISM Eval)
    Interviewer: Doaa Abu Elyounes, Programme Specialist, Ethics of AI Unit, UNESCO

    35 mins