Evaluating AI Systems: Metrics, Methods, and Measurement Gaps
About this listen
A deep dive into the metrics and methodologies essential for robust evaluation of AI systems. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance.
The conversation highlights gaps between emerging benchmarks and real-world regulatory needs.
Speaker: Agnès Delaborde (Laboratoire national de métrologie et d'essais – LNE)
Interviewer: Lihui Xu, Programme Specialist, Ethics of AI Unit, UNESCO