anthropics / hh-rlhf

Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
1,743 · Updated last year
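The repository distributes preference data as gzipped JSONL files. A minimal sketch of reading one split follows, assuming the `helpful-base/train.jsonl.gz` path and the `"chosen"`/`"rejected"` field names used in the repository's data files; adjust the path to whichever split you download.

```python
# Minimal sketch: stream (chosen, rejected) pairs from a hh-rlhf data file.
# Assumes the gzipped JSONL layout of the repo, where each line is a JSON
# object with "chosen" and "rejected" conversation strings.
import gzip
import json

def load_preferences(path):
    """Yield (chosen, rejected) conversation pairs from a .jsonl.gz file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            yield record["chosen"], record["rejected"]

if __name__ == "__main__":
    # Print the first pair, truncated, as a sanity check.
    for chosen, rejected in load_preferences("helpful-base/train.jsonl.gz"):
        print("CHOSEN:  ", chosen[:80])
        print("REJECTED:", rejected[:80])
        break
```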

Alternatives and similar repositories for hh-rlhf

Users interested in hh-rlhf are comparing it to the libraries listed below.
