Red Teaming as a Supervisory Tool: Stress-Testing AI Systems
About this listen
Red teaming is rapidly becoming a critical component of AI oversight. In this episode, Rumman Chowdhury explains how structured adversarial testing can uncover system vulnerabilities, model failures, and misuse pathways. The discussion focuses on practical red-teaming approaches that supervisory authorities can adopt, even with limited resources.
Speaker: Rumman Chowdhury (Human Intelligence)
Interviewer: Mirela Kmetic-Marceau, Project Consultant, Ethics of AI Unit, UNESCO
Hosted on Ausha. See ausha.co/privacy-policy for more information.