eric-mitchell / direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
☆2,564 · Updated 9 months ago
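The repo above is the reference implementation accompanying the DPO paper. For orientation, the DPO objective is a logistic loss on the policy-vs-reference log-probability margin between a preferred and a dispreferred completion; below is a minimal PyTorch sketch of that loss as stated in the paper. The function and tensor names are illustrative, not this repo's actual API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over per-sequence summed log-probs log pi(y|x).

    "chosen" is the preferred completion y_w, "rejected" the dispreferred
    y_l; the reference model is frozen, so its log-probs carry no gradient.
    """
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = beta * (pi_logratios - ref_logratios)
    # -log(sigmoid(x)) == softplus(-x): minimizing this widens the implicit
    # reward margin between chosen and rejected completions.
    return F.softplus(-logits).mean()
```

Here beta controls how far the policy may drift from the reference; the paper's default is around 0.1.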
Alternatives and similar repositories for direct-preference-optimization
Users interested in direct-preference-optimization are comparing it to the libraries listed below.
- A modular RL library to fine-tune language models to human preferences ☆2,307 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆840 · Updated last week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast ☆1,740 · Updated 4 months ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,462 · Updated 11 months ago
- An easy-to-use, scalable, and high-performance RLHF framework based on Ray (PPO & GRPO & REINFORCE++ & LoRA & vLLM & RFT) ☆6,661 · Updated this week
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,359 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (see the loss sketch after this list) ☆894 · Updated 3 months ago
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,162 · Updated last year
- A family of open-source Mixture-of-Experts (MoE) large language models ☆1,526 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,381 · Updated last year
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,767 · Updated 3 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,990 · Updated last year
- Recipes to train reward models for RLHF ☆1,330 · Updated 3 weeks ago
- O1 Replication Journey ☆1,989 · Updated 4 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,738 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF) ☆4,641 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,152 · Updated last year
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models" ☆1,520 · Updated last month
- Paper list for in-context learning 🌷 ☆853 · Updated 7 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,726 · Updated 9 months ago
- AllenAI's post-training codebase ☆2,950 · Updated this week
- Code for the paper "Fine-Tuning Language Models from Human Preferences" ☆1,329 · Updated last year
- A bibliography and survey of the papers surrounding o1 ☆1,192 · Updated 6 months ago
- Aligning Large Language Models with Human: A Survey ☆730 · Updated last year
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,405 · Updated last year
- A library for advanced large language model reasoning ☆2,122 · Updated last month
- Official repo for Open-Reasoner-Zero ☆1,916 · Updated last month
- Reasoning in LLMs: papers and resources, including Chain-of-Thought, OpenAI o1, and DeepSeek-R1 🍓 ☆3,071 · Updated last week
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders" ☆1,510 · Updated 3 months ago
- Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models … ☆2,214 · Updated last week
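SimPO, referenced in the list above, differs from DPO by dropping the reference model entirely: it scores each completion by its length-normalized average log-probability and subtracts a target reward margin gamma. A minimal sketch under those assumptions; the names and default hyperparameters are illustrative, not the SimPO repo's actual API.

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps: torch.Tensor,
               rejected_logps: torch.Tensor,
               chosen_lengths: torch.Tensor,
               rejected_lengths: torch.Tensor,
               beta: float = 2.0, gamma: float = 0.5) -> torch.Tensor:
    """SimPO: reference-free preference loss on length-normalized log-probs."""
    chosen_avg = chosen_logps / chosen_lengths        # avg log-prob per token
    rejected_avg = rejected_logps / rejected_lengths
    # Logistic loss on the scaled margin, offset by a target margin gamma;
    # no reference-model log-probs are needed, unlike DPO.
    logits = beta * (chosen_avg - rejected_avg) - gamma
    return F.softplus(-logits).mean()
```

Length normalization removes the bias toward longer completions that raw summed log-probs would introduce, which is the paper's stated motivation for the reference-free reward.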