PKU-Alignment / safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
☆1,491 · Updated last year
Alternatives and similar repositories for safe-rlhf
Users interested in safe-rlhf are comparing it to the libraries listed below.
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,374 · Updated last year
- Reference implementation for DPO (Direct Preference Optimization) ☆2,619 · Updated 10 months ago
- ☆904 · Updated 11 months ago
- [NeurIPS 2023] RRHF & Wombat ☆808 · Updated last year
- A modular RL library to fine-tune language models to human preferences ☆2,317 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆899 · Updated 4 months ago
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models" ☆1,021 · Updated 7 months ago
- Aligning Large Language Models with Human: A Survey ☆730 · Updated last year
- ☆540 · Updated 5 months ago
- Paper List for In-context Learning 🌷 ☆854 · Updated 8 months ago
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,028 · Updated 9 months ago
- ☆917 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,618 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆638 · Updated 5 months ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆813 · Updated 11 months ago
- Code for the paper "Fine-Tuning Language Models from Human Preferences" ☆1,341 · Updated last year
- O1 Replication Journey ☆1,991 · Updated 5 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆456 · Updated 8 months ago
- Recipes to train a reward model for RLHF. ☆1,386 · Updated 2 months ago
- A trend starting from "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,046 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆656 · Updated last year
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆451 · Updated 5 months ago
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,180 · Updated last year
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,788 · Updated 5 months ago
- Awesome RL Reasoning Recipes ("Triple R") ☆706 · Updated last week
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆967 · Updated 6 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆859 · Updated 2 weeks ago
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆497 · Updated last year
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆963 · Updated last month
- A very simple GRPO implementation for reproducing R1-like LLM thinking. ☆1,130 · Updated 2 months ago