Researcher uncovers weaknesses in AI systems (#16)

About this listen

Artificial intelligence is advancing rapidly, but so are the risks hidden in its systems. In this episode, Sandra Casalini talks with Maura Pintor, Assistant Professor at the PRA Laboratory of the University of Cagliari, about the unseen vulnerabilities of AI models. Pintor explains why proactive security testing, modeling attacks before they happen, is essential for building trustworthy AI. She discusses how misjudgments and confirmation bias slow progress, and why the fast-paced evolution of generative AI poses challenges for data quality and reliability. Despite the hurdles, Pintor remains optimistic: new approaches in automated testing and validation show that secure AI is possible, provided we are ready to challenge our assumptions.