anthropics / hh-rlhf

Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
1,667 · Updated last year
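The repository distributes the preference data as gzipped JSONL files, with each record pairing a "chosen" and a "rejected" conversation. A minimal sketch of reading those pairs is below; the file path (helpful-base/train.jsonl.gz) and the exact field names are assumptions based on the released data layout, not guaranteed by this listing.

```python
# Minimal sketch: iterate over (chosen, rejected) preference pairs.
# Assumes gzipped JSONL files with "chosen" and "rejected" string fields,
# e.g. helpful-base/train.jsonl.gz (path is an assumption).
import gzip
import json

def load_pairs(path):
    """Yield (chosen, rejected) conversation pairs from one data file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            yield record["chosen"], record["rejected"]

if __name__ == "__main__":
    for chosen, rejected in load_pairs("helpful-base/train.jsonl.gz"):
        print("CHOSEN:", chosen[:200])
        print("REJECTED:", rejected[:200])
        break  # inspect just the first pair
```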

Alternatives and similar repositories for hh-rlhf:

Users interested in hh-rlhf are comparing it to the libraries listed below.