AI Safety: Constitutional AI vs Human Feedback

With great power comes great responsibility. How do leading AI companies implement safety and ethics as language models scale? OpenAI pairs its Model Spec with RLHF (Reinforcement Learning from Human Feedback), while Anthropic uses Constitutional AI. A solo episode comparing the technical approaches to maximizing usefulness while minimizing harm, on the topic of AI alignment.

REFERENCES

OpenAI Model Spec

https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview

Anthropic Constitutional AI

https://www.anthropic.com/news/claudes-constitution



To stay in touch, sign up for our newsletter at https://www.superprompt.fm