eric-mitchell / direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
☆2,792 · Updated last year
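For orientation, here is a minimal sketch of the loss this repo implements (Rafailov et al., 2023). The function and argument names below are illustrative, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of preference pairs.

    Each argument is the summed token log-probability of the chosen or
    rejected completion under the trainable policy or the frozen reference.
    """
    # The beta-scaled policy/reference log-ratio is each completion's
    # implicit reward under the DPO parameterization.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic (Bradley-Terry) loss on the chosen-vs-rejected reward margin.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Minimizing this loss raises the policy's implicit reward for preferred completions above that of rejected ones, while the log-ratio terms implicitly keep the policy close to the reference model.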
Alternatives and similar repositories for direct-preference-optimization
Users interested in direct-preference-optimization are comparing it to the libraries listed below.
- A modular RL library to fine-tune language models to human preferences ☆2,369 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆894 · Updated 2 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (a sketch of its loss follows this list) ☆931 · Updated 9 months ago
- O1 Replication Journey ☆2,002 · Updated 10 months ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,560 · Updated last week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,918 · Updated 3 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,636 · Updated last year
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,214 · Updated last year
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,406 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,073 · Updated 2 years ago
- An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Asy… ☆8,476 · Updated 3 weeks ago
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,223 · Updated 2 months ago
- AllenAI's post-training codebase ☆3,373 · Updated this week
- Recipes to train reward models for RLHF. ☆1,485 · Updated 7 months ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,826 · Updated 10 months ago
- Simple RL training for reasoning ☆3,796 · Updated 4 months ago
- Code for the paper "Fine-Tuning Language Models from Human Preferences" ☆1,377 · Updated 2 years ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,798 · Updated 5 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,729 · Updated last year
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,650 · Updated this week
- Official Repo for Open-Reasoner-Zero ☆2,069 · Updated 6 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,222 · Updated last year
- A bibliography and survey of the papers surrounding o1 ☆1,213 · Updated last year
- Paper List for In-context Learning 🌷 ☆871 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,760 · Updated last year
- A library for advanced large language model reasoning ☆2,313 · Updated 5 months ago
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆1,213 · Updated 8 months ago
- Representation Engineering: A Top-Down Approach to AI Transparency ☆918 · Updated last year
- ☆1,320 · Updated 9 months ago
- From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓 ☆3,453 · Updated 6 months ago
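The SimPO entry in the list above drops the reference model entirely, using a length-normalized implicit reward instead. A minimal sketch under the same illustrative-naming caveat as the DPO sketch; the beta and gamma defaults here are placeholder assumptions, not the paper's tuned values:

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps: torch.Tensor,
               rejected_logps: torch.Tensor,
               chosen_lens: torch.Tensor,
               rejected_lens: torch.Tensor,
               beta: float = 2.0,
               gamma: float = 0.5) -> torch.Tensor:
    # The length-normalized average log-probability serves as the implicit
    # reward, so no frozen reference model is required.
    chosen_rewards = beta * chosen_logps / chosen_lens
    rejected_rewards = beta * rejected_logps / rejected_lens
    # Logistic loss on the reward margin, offset by a target margin gamma.
    return -F.logsigmoid(chosen_rewards - rejected_rewards - gamma).mean()
```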