PKU-Alignment / safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
☆1,536 · Updated last month
Alternatives and similar repositories for safe-rlhf
Users interested in safe-rlhf are comparing it to the libraries listed below.
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,399 · Updated last year
- ☆549 · Updated 9 months ago
- ☆908 · Updated last year
- Reference implementation for DPO (Direct Preference Optimization); see the loss sketch after this list ☆2,752 · Updated last year
- [NeurIPS 2023] RRHF & Wombat ☆811 · Updated 2 years ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward; see the loss sketch after this list ☆923 · Updated 8 months ago
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,368 · Updated 2 years ago
- ☆922 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,616 · Updated 2 years ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,822 · Updated 8 months ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,051 · Updated 2 weeks ago
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,037 · Updated last year
- O1 Replication Journey ☆2,001 · Updated 9 months ago
- A very simple GRPO implementation for reproducing R1-like LLM reasoning; see the sketch after this list ☆1,388 · Updated 2 months ago
- ☆764 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆825 · Updated last year
- Research on evaluating and aligning the values of Chinese large language models ☆536 · Updated 2 years ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,200 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆671 · Updated 8 months ago
- Aligning Large Language Models with Human: A Survey ☆733 · Updated 2 years ago
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,080 · Updated last year
- A curated collection of open-source SFT datasets, continuously updated ☆545 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ☆2,359 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,056 · Updated 2 years ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆284 · Updated 2 years ago
- Tuning LLMs with no tears 💦; Sample Design Engineering (SDE) for more efficient downstream tuning ☆1,014 · Updated last year
- Recipes to train reward models for RLHF; see the pairwise-loss sketch after this list ☆1,458 · Updated 5 months ago
- personal chatgpt ☆385 · Updated 9 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,787 · Updated 3 months ago
- Paper List for In-context Learning 🌷 ☆866 · Updated last year
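For orientation, a minimal sketch of the DPO objective behind the reference implementation listed above. This is not that repository's actual code: the function name, tensor names, and the `beta` default are illustrative, and it assumes per-response log-probabilities have already been summed over tokens.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is a 1-D tensor of summed log-probs, one entry per
    preference pair. Names and the beta default are illustrative.
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()
```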
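SimPO, by contrast, drops the reference model: its implicit reward is the length-normalized log-probability, and a target margin `gamma` separates chosen from rejected responses. A sketch under the same caveats (names and the `beta`/`gamma` defaults are assumptions, not the repository's API):

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps,
               chosen_lens, rejected_lens, beta=2.0, gamma=0.5):
    """SimPO: length-normalized implicit rewards with a target margin.

    chosen_logps / rejected_logps: summed log-probs per response;
    chosen_lens / rejected_lens: response lengths in tokens.
    """
    r_chosen = beta * chosen_logps / chosen_lens
    r_rejected = beta * rejected_logps / rejected_lens
    return -F.logsigmoid(r_chosen - r_rejected - gamma).mean()
```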
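The distinguishing step in GRPO is the group-relative advantage: sample several completions per prompt and standardize their rewards within the group, instead of training a separate value network. A sketch of just that step (the shape convention and `eps` are assumptions):

```python
import torch

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages for GRPO.

    rewards: tensor of shape (num_prompts, group_size), one scalar
    reward per sampled completion. Each completion's advantage is its
    reward standardized against the other completions for the same prompt.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)
```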
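Reward-model recipes like the one listed above commonly train on preference pairs with a Bradley-Terry style loss: maximize the log-sigmoid of the score gap between the chosen and rejected response. A minimal sketch (the scalar-score inputs and names are assumptions about such a setup, not that repository's code):

```python
import torch
import torch.nn.functional as F

def pairwise_rm_loss(chosen_scores, rejected_scores):
    """Bradley-Terry pairwise loss for reward-model training.

    chosen_scores / rejected_scores: scalar rewards the model assigns
    to the preferred and dispreferred response in each pair.
    """
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()
```

The trained scalar reward then serves as the optimization target for PPO-style RLHF pipelines such as safe-rlhf's.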