eric-mitchell / direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
☆2,812 · Updated last year
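For orientation, the objective this repo implements: given a prompt, a preferred completion, and a dispreferred one, DPO trains the policy directly on the Bradley-Terry preference likelihood, using the policy-vs-reference log-ratio as an implicit reward. Below is a minimal PyTorch sketch of that loss; the function and argument names are illustrative assumptions, not this repo's actual API.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss for a batch of preference pairs.

    Each argument is a 1-D tensor of summed per-token log-probabilities of
    the chosen / rejected completion under the trainable policy or the
    frozen reference model; beta scales the implicit reward and controls
    how far the policy may drift from the reference.
    """
    # Implicit rewards are the policy-vs-reference log-ratios.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry preference loss: -log sigmoid(reward margin).
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```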
Alternatives and similar repositories for direct-preference-optimization
Users interested in direct-preference-optimization are comparing it to the libraries listed below:
- A modular RL library to fine-tune language models to human preferences ☆2,378 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆896 · Updated 2 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (see the sketch after this list) ☆934 · Updated 10 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,925 · Updated 4 months ago
- O1 Replication Journey ☆2,001 · Updated 11 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,642 · Updated last year
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,830 · Updated 11 months ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,217 · Updated last year
- An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & TIS & vLLM & Ray & Dynamic Sampling… ☆8,625 · Updated this week
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,081 · Updated 2 years ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,569 · Updated 3 weeks ago
- AllenAI's post-training codebase ☆3,456 · Updated this week
- Simple RL training for reasoning ☆3,808 · Updated 4 months ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,408 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,227 · Updated last year
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,378 · Updated 2 years ago
- ☆1,327 · Updated 9 months ago
- Recipes to train reward models for RLHF. ☆1,488 · Updated 7 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,405 · Updated last year
- Official Repo for Open-Reasoner-Zero ☆2,079 · Updated 6 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,531 · Updated 2 years ago
- A bibliography and survey of the papers surrounding o1 ☆1,214 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,453 · Updated 3 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,732 · Updated last year
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,808 · Updated 6 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,651 · Updated last year
- A very simple GRPO implementation for reproducing r1-like LLM thinking. ☆1,499 · Updated last month
- Paper List for In-context Learning 🌷 ☆871 · Updated last year
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,239 · Updated last week
- 📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥 ☆1,849 · Updated 2 weeks ago
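Among the alternatives above, SimPO is the closest drop-in variant of DPO: it removes the reference model entirely and uses the length-normalized average log-probability as the implicit reward, requiring the chosen completion to beat the rejected one by a fixed target margin gamma. A hedged sketch under the same conventions as the DPO snippet above; names and defaults are assumptions taken from the paper, not the authors' released code.

```python
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, chosen_lens, rejected_lens,
               beta=2.0, gamma=0.5):
    """SimPO loss: reference-free preference optimization.

    chosen_logps / rejected_logps are summed per-token log-probabilities
    under the policy; chosen_lens / rejected_lens are completion lengths
    in tokens. No frozen reference model is required.
    """
    # Implicit reward: length-normalized average log-probability.
    chosen_rewards = beta * chosen_logps / chosen_lens
    rejected_rewards = beta * rejected_logps / rejected_lens
    # Require the chosen reward to exceed the rejected one by margin gamma.
    return -F.logsigmoid(chosen_rewards - rejected_rewards - gamma).mean()
```

Relative to DPO, this trades the reference model's memory and forward-pass cost for one extra hyperparameter (the margin gamma) that must be tuned.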