eric-mitchell / direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
☆2,377 · Updated 6 months ago
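For context, DPO fine-tunes a policy directly on preference pairs by contrasting the policy's log-probability ratio against a frozen reference model, with no separate reward model or RL loop. Below is a minimal, hypothetical sketch of that loss in PyTorch; the function name and arguments are illustrative and not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Illustrative DPO loss over summed per-sequence log-probabilities.

    Each argument is a 1-D tensor of shape (batch,); beta scales the
    implicit KL penalty toward the frozen reference model.
    """
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_logratios - rejected_logratios)
    # -log(sigmoid(logits)), computed stably via logsigmoid
    return -F.logsigmoid(logits).mean()
```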
Alternatives and similar repositories for direct-preference-optimization:
Users interested in direct-preference-optimization are comparing it to the libraries listed below.
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆803 · Updated last week
- A modular RL library to fine-tune language models to human preferences ☆2,274 · Updated 11 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT) ☆4,809 · Updated this week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆821 · Updated this week
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,318 · Updated 11 months ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,127 · Updated 11 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,654 · Updated last month
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,446 · Updated 11 months ago
- O1 Replication Journey ☆1,947 · Updated last month
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆3,729 · Updated this week
- Paper List for In-context Learning 🌷 ☆836 · Updated 4 months ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,413 · Updated 8 months ago
- Recipes to train a reward model for RLHF. ☆1,177 · Updated last week
- Reading list on instruction tuning, a trend that started with Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆760 · Updated last year
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,651 · Updated last month
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,426 · Updated 7 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,681 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,498 · Updated 3 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,421 · Updated 10 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,117 · Updated 9 months ago
- Aligning Large Language Models with Human: A Survey ☆718 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,646 · Updated 6 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,155 · Updated 3 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,900 · Updated last year
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,282 · Updated last year
- [NeurIPS 2023] RRHF & Wombat ☆799 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,743 · Updated last week
- Codebase for Merging Language Models (ICML 2024) ☆795 · Updated 9 months ago
- Robust recipes to align language models with human and AI preferences ☆5,001 · Updated 3 months ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,485 · Updated 8 months ago