PKU-Alignment / safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
1,595 stars · Nov 24, 2025 · Updated 4 months ago

Alternatives and similar repositories for safe-rlhf

Users interested in safe-rlhf are comparing it to the libraries listed below.

