anthropics / hh-rlhf

Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
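As a quick orientation, here is a minimal sketch of loading the preference pairs with the Hugging Face `datasets` library. It assumes the dataset's published hub mirror under the ID `Anthropic/hh-rlhf`; if you work from this repo's raw `.jsonl.gz` files instead, the loading call will differ, though each record still carries a `chosen` and a `rejected` dialogue transcript.

```python
from datasets import load_dataset

# Load the train split of the preference data from the Hugging Face hub
# (hub ID "Anthropic/hh-rlhf" is the assumed mirror of this repository).
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

# Each record holds two full dialogue transcripts: the response the human
# labeler preferred ("chosen") and the one they passed over ("rejected").
example = dataset[0]
print(example["chosen"][:200])
print(example["rejected"][:200])
```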
1,609 stars · Updated last year

Related projects

Alternative and complementary repositories for hh-rlhf