[NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward
☆946, updated Feb 16, 2025
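Since this page centers on SimPO's reference-free reward, here is a minimal sketch of its objective in PyTorch-style code. The implicit reward for a response is its length-normalized policy log-probability scaled by beta, compared with a target margin gamma; the function name, tensor shapes, and the hyperparameter defaults shown are illustrative assumptions, not taken from this repository's code.

```python
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps,
               chosen_lens, rejected_lens,
               beta=2.0, gamma=0.5):
    """Reference-free SimPO objective (sketch).

    chosen_logps / rejected_logps: summed token log-probs per response
    under the policy; chosen_lens / rejected_lens: token counts.
    """
    # Implicit reward: length-normalized policy log-prob, no reference model.
    r_chosen = beta * chosen_logps / chosen_lens
    r_rejected = beta * rejected_logps / rejected_lens
    # Bradley-Terry preference loss with a target reward margin gamma.
    return -F.logsigmoid(r_chosen - r_rejected - gamma).mean()
```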
Alternatives and similar repositories for SimPO
Users interested in SimPO are comparing it to the libraries listed below.
- Reference implementation for DPO (Direct Preference Optimization); a loss sketch contrasting DPO with SimPO follows this list (☆2,859, updated Aug 11, 2024)
- Robust recipes to align language models with human and AI preferences (☆5,510, updated Sep 8, 2025)
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) (☆906, updated Sep 30, 2025)
- Recipes to train reward models for RLHF (☆1,517, updated Apr 24, 2025)
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) (☆9,084, updated this week)
- Reference implementation for Token-level Direct Preference Optimization (TDPO) (☆151, updated Feb 14, 2025)
- Official repository for ORPO (☆471, updated May 31, 2024)
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" (☆392, updated Jan 19, 2025)
- RewardBench: the first evaluation tool for reward models (☆697, updated Feb 16, 2026)
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. (☆1,953, updated Aug 9, 2025)
- AllenAI's post-training codebase (☆3,605, updated this week)
- The official implementation of Self-Play Fine-Tuning (SPIN) (☆1,235, updated May 8, 2024)
- A recipe for online RLHF and online iterative DPO (☆540, updated Dec 28, 2024)
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" (☆41, updated Sep 24, 2024)
- Train transformer language models with reinforcement learning (☆17,460, updated this week)
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … (☆829, updated Mar 17, 2025)
- Arena-Hard-Auto: An automatic LLM benchmark (☆1,003, updated Jun 21, 2025)
- Simple RL training for reasoning (☆3,830, updated Dec 23, 2025)
- Joint use of the CPO and SimPO methods for improved reference-free preference learning (☆56, updated Aug 13, 2024)
- Tools for merging pretrained large language models (☆6,826, updated this week)
- Scalable toolkit for efficient model alignment (☆849, updated Oct 6, 2025)
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback (☆1,589, updated Nov 24, 2025)
- verl: Volcano Engine Reinforcement Learning for LLMs (☆19,519, updated this week)
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] (☆589, updated Dec 9, 2024)
- 800,000 step-level correctness labels on LLM solutions to MATH problems (☆2,094, updated Jun 1, 2023)
- The official implementation of Self-Play Preference Optimization (SPPO) (☆582, updated Jan 23, 2025)
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning (☆512, updated Oct 20, 2024)
- Secrets of RLHF in Large Language Models Part I: PPO (☆1,416, updated Mar 3, 2024)
- Benchmarking LLMs with Challenging Tasks from Real Users (☆246, updated Nov 3, 2024)
- Scalable RL solution for advanced reasoning of language models (☆1,809, updated Mar 18, 2025)
- Official Repo for Open-Reasoner-Zero (☆2,087, updated Jun 2, 2025)
- Democratizing Reinforcement Learning for LLMs (☆5,167, updated this week)
- O1 Replication Journey (☆1,999, updated Jan 14, 2025)
- A framework for few-shot evaluation of language models (☆11,540, updated this week)
- A series of technical reports on slow thinking with LLMs (☆761, updated Aug 13, 2025)
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" (☆486, updated Mar 19, 2024)
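For contrast with the reference-free sketch above, here is a sketch of the standard DPO objective implemented by the first repository in the list: DPO's implicit reward is the beta-scaled log-ratio between the policy and a frozen reference model, which is exactly the extra model SimPO does away with. Variable names and the beta default are illustrative assumptions, not this repository's actual code.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective (sketch); all inputs are summed
    token log-probs per response under the named model."""
    # Implicit reward: log-ratio of policy to frozen reference model.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Bradley-Terry preference loss on the reward difference.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```

Note the design difference: DPO must keep the reference model in memory to compute `ref_*_logps`, while SimPO's length normalization replaces that anchor entirely.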