anthropics / hh-rlhf

Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
1,814 · Updated Jun 17, 2025 (7 months ago)
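The repository hosts the raw human preference data described above. As a rough sketch of what the preference pairs look like in practice, the snippet below loads them through the Hugging Face datasets library; the mirror id "Anthropic/hh-rlhf" and the "chosen"/"rejected" field names are assumptions for illustration, not details taken from this page.

from datasets import load_dataset

# Load the human preference pairs (assumed Hugging Face mirror id).
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

# Each record pairs the same conversation prefix with a human-preferred
# ("chosen") and a dispreferred ("rejected") assistant continuation.
example = dataset[0]
print(example["chosen"][:300])
print(example["rejected"][:300])

In the RLHF setup the dataset was built for, a reward model is typically trained to score the chosen continuation above the rejected one.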

Alternatives and similar repositories for hh-rlhf

Users interested in hh-rlhf are comparing it to the libraries listed below.
