eric-mitchell / direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
☆2,024 · Updated last month
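At its core, DPO replaces the RL step of RLHF with a single classification-style loss over preference pairs. Below is a minimal sketch of that objective in PyTorch; the tensor names and the `beta` default are illustrative, not necessarily this repo's actual API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-prob margin between the chosen and rejected responses,
    # under the trained policy and under the frozen reference model.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # DPO loss: -log sigmoid(beta * (policy margin - reference margin)).
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()
```

Each `*_logps` tensor holds the summed token log-probabilities of one response per batch element; the reference model stays frozen throughout training.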
Related projects:
- An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral) ☆2,026 · Updated this week
- A modular RL library to fine-tune language models to human preferences ☆2,173 · Updated 6 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,352 · Updated 6 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆689 · Updated last week
- Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,041 · Updated 6 months ago
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs ☆1,764 · Updated this week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,436 · Updated this week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,561 · Updated last year
- SimPO: Simple Preference Optimization with a Reference-Free Reward (see the sketch after this list) ☆640 · Updated 3 weeks ago
- Reading list for instruction tuning. A trend starting from Natural-Instruction (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆748 · Updated last year
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09… ☆1,857 · Updated this week
- A library for advanced large language model reasoning ☆1,124 · Updated 2 weeks ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,333 · Updated 2 months ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆758 · Updated 2 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,148 · Updated last year
- MOSS-RLHF ☆1,267 · Updated 6 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,099 · Updated 7 months ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models" ☆1,381 · Updated 3 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,206 · Updated 2 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,314 · Updated 5 months ago
- Aligning Large Language Models with Human: A Survey ☆671 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,021 · Updated 8 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,830 · Updated 2 weeks ago
- Paper List for In-context Learning 🌷 ☆783 · Updated 2 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,698 · Updated 7 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,306 · Updated 5 months ago
- Codebase for Merging Language Models (ICML 2024) ☆745 · Updated 4 months ago
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,585 · Updated last year
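As noted in the SimPO entry above, its key idea is to drop DPO's reference model and use the length-normalized log-likelihood as an implicit reward, with a target margin. A minimal sketch under those assumptions (the hyperparameter defaults are illustrative; the paper tunes `beta` and `gamma` per setting):

```python
import torch
import torch.nn.functional as F

def simpo_loss(policy_chosen_logps: torch.Tensor,
               policy_rejected_logps: torch.Tensor,
               chosen_lengths: torch.Tensor,
               rejected_lengths: torch.Tensor,
               beta: float = 2.0,
               gamma: float = 0.5) -> torch.Tensor:
    # Implicit reward: length-normalized log-likelihood, so no frozen
    # reference model is needed (the main difference from DPO).
    chosen_rewards = beta * policy_chosen_logps / chosen_lengths
    rejected_rewards = beta * policy_rejected_logps / rejected_lengths
    # Bradley-Terry-style loss with a target reward margin gamma.
    return -F.logsigmoid(chosen_rewards - rejected_rewards - gamma).mean()
```

The `*_lengths` tensors hold the token counts of each response, used only for the length normalization that replaces the reference-model term.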