Episode 37 - NIST Report on Adversarial Machine Learning Taxonomy and Terminology
About this listen
This NIST report offers a comprehensive exploration of adversarial machine learning (AML), detailing threats against both predictive AI (PredAI) and generative AI (GenAI) systems. It presents a structured taxonomy and terminology of attacks, categorising them by the AI system property they target, such as availability, integrity, and privacy, with an additional GenAI-specific category for misuse enablement. The report outlines the stages of the machine learning lifecycle vulnerable to attack and the varying capabilities and knowledge an attacker might possess. It also describes existing and potential mitigation strategies against these evolving threats, highlighting the inherent trade-offs and open challenges in securing AI systems.