AI Safety: Constitutional AI vs Human Feedback
About this listen
With great power comes great responsibility. How do leading AI companies implement safety and ethics as language models scale? OpenAI uses its Model Spec combined with RLHF (Reinforcement Learning from Human Feedback); Anthropic uses Constitutional AI. This episode compares the two technical approaches to maximizing usefulness while minimizing harm. A solo episode on AI alignment.
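
To make the contrast concrete, here is a minimal sketch of the critique-and-revision loop at the heart of Constitutional AI's supervised phase: the model critiques its own output against a written principle, then revises it, with no human labeler in the loop. The function names, the single example principle, and the placeholder generate() are illustrative assumptions, not Anthropic's actual API or constitution.

# Toy sketch of a Constitutional AI critique-and-revision loop.
# All names here are hypothetical; generate() stands in for a real LLM call.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
]

def generate(prompt: str) -> str:
    # Placeholder: in practice this would call a language model.
    return f"<model output for: {prompt!r}>"

def critique_and_revise(user_prompt: str) -> str:
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        # The model critiques its own response against a stated principle...
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{response}"
        )
        # ...then rewrites the response to address that critique.
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}"
        )
    # The revised responses become supervised fine-tuning data,
    # whereas RLHF would instead rank outputs using human preference labels.
    return response

if __name__ == "__main__":
    print(critique_and_revise("How do I pick a strong password?"))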
REFERENCES
OpenAI Model Spec
https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview
Anthropic Constitutional AI
https://www.anthropic.com/news/claudes-constitution
To stay in touch, sign up for our newsletter at https://www.superprompt.fm