Analysis - Regulatory Expectations Around Explainable AI
About this listen
In this Analysis, we examine why explainability, not just accuracy, has become the new standard for AI in fraud and AML, as regulators demand clear, defensible decision-making from financial institutions.
This episode explores:
Why “the model said so” is no longer an acceptable answer under regulatory scrutiny
The difference between transparency and true explainability—and why it matters in audits
The four capabilities that turn AI from a black box into a defensible control
Read the full analysis and related research:
https://www.datavisor.com/blog/regulatory-expectations-around-explainable-ai
Chapters:
00:00 The AI Paradox
02:08 From Rules to Black Boxes
04:16 Legal Risk: Adverse Action & Bias
06:08 Transparency vs. Explainability
08:38 The 4 Requirements of Defensible AI
14:05 Building AI You Can Defend