Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
☆1,591 · Updated Nov 24, 2025
Alternatives and similar repositories for safe-rlhf
Users interested in safe-rlhf are comparing it to the libraries listed below.
- Secrets of RLHF in Large Language Models Part I: PPO (☆1,420, updated Mar 3, 2024)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) (☆9,191, updated Mar 16, 2026)
- A curated list of reinforcement learning with human feedback resources (continually updated) (☆4,331, updated Dec 9, 2025)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,832, updated Jun 17, 2025)
- Multi-agent Social Simulation + Efficient, Effective, and Stable alternative of RLHF. Code for the paper "Training Socially Aligned Langu… (☆355, updated Jun 18, 2023)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,739, updated Jan 8, 2024)
- [NIPS2023] RRHF & Wombat (☆808, updated Sep 22, 2023)
- Reference implementation for DPO (Direct Preference Optimization) (☆2,868, updated Aug 11, 2024); a minimal sketch of the DPO loss appears after this list
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. (☆842, updated Jul 1, 2024)
- Train transformer language models with reinforcement learning. (☆17,697, updated this week)
- JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research. (☆1,088, updated Mar 17, 2025)
- A modular RL library to fine-tune language models to human preferences (☆2,382, updated Mar 1, 2024)
- Chinese safety prompts for evaluating and improving the safety of LLMs. (☆1,136, updated Feb 27, 2024)
- BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese dialogue LLM) (☆8,286, updated Oct 16, 2024)
- Recipes to train reward models for RLHF. (☆1,521, updated Apr 24, 2025)
- RewardBench: the first evaluation tool for reward models. (☆704, updated Feb 16, 2026)
- Example models using DeepSpeed (☆6,807, updated Mar 4, 2026)
- Instruction Tuning with GPT-4 (☆4,338, updated Jun 11, 2023)
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). (☆176, updated Oct 27, 2023)
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… (☆2,800, updated Dec 12, 2023)
- 800,000 step-level correctness labels on LLM solutions to MATH problems (☆2,106, updated Jun 1, 2023)
- Open Academic Research on Improving LLaMA to SOTA LLM (☆1,610, updated Aug 30, 2023)
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct (☆191, updated Jan 16, 2025)
- Robust recipes to align language models with human and AI preferences (☆5,527, updated Sep 8, 2025)
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (☆948, updated Feb 16, 2025)
- A large-scale 7B pretraining language model developed by BaiChuan-Inc. (☆5,677, updated Jul 18, 2024)
- NeurIPS 2023: Safe Policy Optimization: A benchmark repository for safe reinforcement learning algorithms (☆405, updated Mar 20, 2024)
- verl: Volcano Engine Reinforcement Learning for LLMs (☆20,097, updated this week)
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting (☆2,769, updated Aug 4, 2024)
- Large-scale, Informative, and Diverse Multi-round Chat Data (and Models) (☆2,805, updated Mar 13, 2024)
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. (☆1,961, updated Aug 9, 2025)
- An open-source tool-augmented conversational language model from Fudan University (☆12,096, updated Jul 13, 2024)
- Research on evaluating and aligning the values of Chinese large language models (☆554, updated Jul 20, 2023)
- Code for the paper "Fine-Tuning Language Models from Human Preferences" (☆1,381, updated Jul 25, 2023)
- Chinese-LLaMA 1&2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pretraining and instruction fine-tuning datasets (☆3,055, updated Apr 14, 2024)
- Aligning pretrained language models with instruction data generated by themselves. (☆4,587, updated Mar 27, 2023)
- Scalable toolkit for efficient model alignment (☆851, updated Oct 6, 2025)
- NeurIPS 2023: Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark (☆550, updated Dec 4, 2025)
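
For orientation on the preference-optimization entries above (DPO and relatives such as SimPO), here is a minimal sketch of the DPO loss behind the reference implementation listed earlier. The function name, argument names, and the β = 0.1 default are illustrative assumptions, not that repository's actual API; the inputs are per-sequence log-probabilities under the trained policy and a frozen reference model.

```python
# Minimal DPO loss sketch (Rafailov et al., 2023). All names here are
# illustrative assumptions, not the reference repository's actual API.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,    # shape (batch,)
             policy_rejected_logps: torch.Tensor,  # shape (batch,)
             ref_chosen_logps: torch.Tensor,       # shape (batch,)
             ref_rejected_logps: torch.Tensor,     # shape (batch,)
             beta: float = 0.1) -> torch.Tensor:
    """Scalar DPO loss from per-sequence summed log-probabilities."""
    # Log-ratio of policy to frozen reference for each response.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * margin): widen the gap between preferred and
    # dispreferred responses, measured relative to the reference model.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

In typical implementations, each log-probability is the sum of token log-probs over the response tokens only, with the prompt tokens masked out of the sum.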