Do AI Models Lie on Purpose? Scheming, Deception, and Alignment with Marius Hobbhahn of Apollo Research
Marius Hobbhahn is the CEO and co-founder of Apollo Research. In a joint research project with OpenAI, his team found that as models become more capable, they are developing the ability to hide their true reasoning from human oversight.

Jeffrey Ladish, Executive Director of Palisade Research, talks with Marius about this work. They discuss the difference between hallucination and deliberate deception, and the urgent challenge of aligning increasingly capable AI systems.
Links:
Marius’ Twitter: https://twitter.com/mariushobbhahn
Apollo Research Twitter: https://twitter.com/apolloaievals
Apollo Research: https://www.apolloresearch.ai
Palisade Research: https://palisaderesearch.org/
Twitter/X: https://x.com/PalisadeAI
Anti-Scheming Project: https://www.antischeming.ai
Research paper “Stress Testing Deliberative Alignment for Anti-Scheming Training”: https://www.arxiv.org/pdf/2509.15541
Blog posts from OpenAI and Apollo: https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/ https://www.apolloresearch.ai/research/stress-testing-deliberative-alignment-for-anti-scheming-training/