anthropics/hh-rlhf
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
1,832 stars · Last updated Jun 17, 2025 (9 months ago)
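For readers who want to look at the underlying preference data, the sketch below loads it with the Hugging Face `datasets` library. The Hub dataset name `Anthropic/hh-rlhf` and the `chosen`/`rejected` field names are assumptions based on the public release, not details stated on this page.

```python
# Minimal sketch of inspecting the hh-rlhf preference data, assuming the
# dataset is mirrored on the Hugging Face Hub as "Anthropic/hh-rlhf"
# (the GitHub repo itself distributes the same data as gzipped JSONL files).
from datasets import load_dataset

ds = load_dataset("Anthropic/hh-rlhf", split="train")

# Each record pairs two complete human-assistant conversations:
# the human-preferred transcript ("chosen") and the dispreferred
# one ("rejected"), which is the comparison format used for RLHF
# reward-model training.
example = ds[0]
print(example["chosen"][:300])
print(example["rejected"][:300])
```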

Alternatives and similar repositories for hh-rlhf

Users interested in hh-rlhf are comparing it to the libraries listed below.
