eric-mitchell / direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
☆2,779 · Updated last year
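For orientation: DPO trains the policy directly on preference pairs, with no explicit reward model or RL loop. Below is a minimal PyTorch sketch of the loss from the paper, not this repo's exact code; the tensor names are illustrative.

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen_logps, pi_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss (Rafailov et al., 2023). Inputs are per-example sums of
    token log-probs for the preferred/dispreferred response under the
    trainable policy (pi_*) and the frozen reference model (ref_*)."""
    # Implicit reward margin: how much more the policy favors the chosen
    # response over the rejected one, relative to the reference model.
    logits = (pi_chosen_logps - pi_rejected_logps) - (ref_chosen_logps - ref_rejected_logps)
    # -log(sigmoid(beta * margin)), averaged over the batch; logsigmoid is numerically stable.
    return -F.logsigmoid(beta * logits).mean()
```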
Alternatives and similar repositories for direct-preference-optimization
Users interested in direct-preference-optimization are comparing it to the libraries listed below.
- A modular RL library to fine-tune language models to human preferences ☆2,363 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆926 · Updated 8 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆892 · Updated last month
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,554 · Updated 2 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Asy… ☆8,341 · Updated last week
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,633 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,896 · Updated 3 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,068 · Updated 2 years ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,209 · Updated last year
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,402 · Updated last year
- O1 Replication Journey ☆2,002 · Updated 10 months ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,823 · Updated 9 months ago
- Code for the paper "Fine-Tuning Language Models from Human Preferences" ☆1,374 · Updated 2 years ago
- AllenAI's post-training codebase ☆3,284 · Updated last week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,792 · Updated 4 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,514 · Updated 2 years ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,756 · Updated last year
- ☆1,309 · Updated 8 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,725 · Updated last year
- Recipes to train reward models for RLHF. ☆1,475 · Updated 6 months ago
- Official Repo for Open-Reasoner-Zero ☆2,060 · Updated 5 months ago
- Simple RL training for reasoning ☆3,784 · Updated 3 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,213 · Updated last year
- A bibliography and survey of the papers surrounding o1 ☆1,208 · Updated 11 months ago
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,622 · Updated this week
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,200 · Updated last month
- Implementation of the training framework proposed in "Self-Rewarding Language Model", from MetaAI ☆1,399 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,634 · Updated last year
- Aligning pretrained language models with instruction data generated by themselves. ☆4,519 · Updated 2 years ago
- A very simple GRPO implementation for reproducing r1-like LLM thinking (see the advantage sketch after this list) ☆1,438 · Updated 3 months ago
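For the GRPO entry above: GRPO drops PPO's learned value network and instead standardizes rewards within a group of completions sampled for the same prompt. A minimal sketch of that advantage computation, assuming scalar per-completion rewards (illustrative, not that repo's code):

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards, one per sampled
    completion. Each completion's advantage is its reward standardized
    against the other completions for the same prompt, so no value
    network is needed."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)
```

These group-normalized advantages then feed a PPO-style clipped policy-gradient objective.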