Reference implementation for Token-level Direct Preference Optimization (TDPO)
☆152, updated Feb 14, 2025
Alternatives and similar repositories for Token-level-Direct-Preference-Optimization
Users interested in Token-level-Direct-Preference-Optimization are comparing it to the libraries listed below.
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (☆948, updated Feb 16, 2025)
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" (☆41, updated Sep 24, 2024)
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" (☆392, updated Jan 19, 2025)
- [ICLR 2024] Official implementation for the paper "Beyond imitation: Leveraging fine-grained quality signals for alignment" (☆10, updated May 5, 2024)
- ☆14, updated Mar 5, 2024
- The official code for PDVN: Retrosynthetic Planning with Dual Value Networks (ICML 2023) (☆32, updated Apr 21, 2024)
- Code for the paper "Dense Reward for Free in Reinforcement Learning from Human Feedback" (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… (☆38, updated Aug 11, 2024)
- Reference implementation for DPO (Direct Preference Optimization) (☆2,866, updated Aug 11, 2024)
- ☆16, updated Jul 29, 2025
- The official implementation of Self-Exploring Language Models (SELM) (☆63, updated Jun 4, 2024)
- ☆116, updated Jan 21, 2025
- Recipes to train reward models for RLHF (☆1,521, updated Apr 24, 2025)
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning (☆193, updated Mar 20, 2025)
- Source code for Self-Evaluation Guided MCTS for online DPO (☆329, updated Jan 29, 2026)
- A recipe for online RLHF and online iterative DPO (☆544, updated Dec 28, 2024)
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" (☆39, updated Jan 12, 2024)
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment (☆56, updated Jun 16, 2024)
- Trust Region Preference Approximation: a simple and stable reinforcement learning algorithm for LLM reasoning (☆14, updated Jun 28, 2025)
- [NeurIPS 2023] De novo Drug Design using Reinforcement Learning with Multiple GPT Agents (☆39, updated Mar 27, 2024)
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) (☆908, updated Sep 30, 2025)
- ☆16, updated Nov 26, 2024
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process (☆30, updated Aug 2, 2024)
- Filtered Direct Preference Optimization (fDPO), which enhances language model alignment with human preferences by discarding lo… (☆16, updated Nov 27, 2024)
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards (☆47, updated Apr 15, 2025)
- [ICLR 2023] Learnable Randomness Injection (LRI) for interpretable Geometric Deep Learning (☆25, updated Jul 18, 2023)
- Align, a general text alignment function (☆15, updated Dec 7, 2023)
- ☆282, updated Jan 6, 2025
- Code for Contrastive Preference Learning (CPL) (☆180, updated Nov 22, 2024)
- Code for the paper "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines" (☆11, updated Oct 11, 2024)
- The official implementation of Self-Play Preference Optimization (SPPO) (☆584, updated Jan 23, 2025)
- PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing (☆20, updated Mar 18, 2025)
- ☆218, updated Feb 20, 2025
- ☆11, updated Dec 28, 2023
- ☆11, updated Jun 30, 2020
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" (☆29, updated Dec 19, 2023)
- Official repository for ORPO (☆473, updated May 31, 2024)
- ScreenExplorer: Training a Vision-Language Model for Diverse Exploration in Open GUI World (☆25, updated Jun 17, 2025)
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, Ray, async RL) (☆9,191, updated this week)
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search [NeurIPS 2024] (☆694, updated Jan 20, 2025)
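For orientation, the sequence-level DPO objective that TDPO and several of the listed variants build on can be sketched as follows. This is a minimal illustration with made-up scalar log-probabilities, not code from any of the repositories above; TDPO itself applies the reward margin per token and adds a sequential KL penalty, both of which this sketch omits.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Sequence-level DPO loss for a single preference pair.

    The reward margin is beta times the difference of the
    policy/reference log-ratios between the chosen and rejected
    responses; the loss is -log(sigmoid(margin)), written here as
    softplus(-margin) for numerical stability.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    return math.log1p(math.exp(-margin))

# A policy that already favours the chosen response (the log-probs
# below are made-up scalars) incurs less than the neutral loss log 2.
assert dpo_loss(-10.0, -14.0, -12.0, -12.0) < math.log(2.0)
```

At a neutral starting point (policy equal to the reference) the margin is zero and the loss is exactly log 2; training pushes the margin positive, shrinking the loss.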