Reference implementation for Token-level Direct Preference Optimization (TDPO)
☆151, updated Feb 14, 2025
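For orientation, TDPO builds on the sequence-level DPO objective, rescoring it token by token with an additional sequential KL term. Below is a minimal sketch of the vanilla DPO loss that TDPO refines; this is illustrative only, and `dpo_loss` and its argument names are my own, not this repository's API:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Sequence-level DPO loss for one preference pair.

    logp_w / logp_l: summed log-probs of the chosen / rejected response
    under the policy; ref_logp_w / ref_logp_l: the same quantities under
    the frozen reference model. TDPO instead scores each token and adds
    a sequential KL penalty, which this sketch omits.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return math.log(1.0 + math.exp(-margin))  # -log(sigmoid(margin))

# With zero margin the loss is exactly log 2; a positive margin (the
# policy favors the chosen answer more than the reference does) lowers it.
print(dpo_loss(0.0, 0.0, 0.0, 0.0))          # ≈ 0.6931 (log 2)
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))  # ≈ 0.5981
```

Minimizing this loss pushes the policy's log-probability margin for the chosen response above the reference model's margin, without an explicit reward model.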
Alternatives and similar repositories for Token-level-Direct-Preference-Optimization
Users interested in Token-level-Direct-Preference-Optimization are comparing it to the libraries listed below.
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (☆946, updated Feb 16, 2025)
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" (☆391, updated Jan 19, 2025)
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" (☆41, updated Sep 24, 2024)
- [ICLR 2024] Official implementation of the paper "Beyond imitation: Leveraging fine-grained quality signals for alignment" (☆10, updated May 5, 2024)
- Code for the paper "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines" (☆11, updated Oct 11, 2024)
- ☆16, updated Jul 29, 2025
- Align, a general text alignment function (☆15, updated Dec 7, 2023)
- ☆14, updated Mar 5, 2024
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards (☆47, updated Apr 15, 2025)
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning (☆193, updated Mar 20, 2025)
- A recipe for online RLHF and online iterative DPO (☆539, updated Dec 28, 2024)
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" (☆39, updated Jan 12, 2024)
- Recipes to train reward models for RLHF (☆1,515, updated Apr 24, 2025)
- Reference implementation for DPO (Direct Preference Optimization) (☆2,855, updated Aug 11, 2024)
- Code for the paper "Dense Reward for Free in Reinforcement Learning from Human Feedback" (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… (☆38, updated Aug 11, 2024)
- ☆16, updated Nov 26, 2024
- [ICML 2024] Code for the paper "Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases" (☆38, updated Jul 12, 2024)
- ☆282, updated Jan 6, 2025
- ☆28, updated May 24, 2025
- Source code for Self-Evaluation Guided MCTS for online DPO (☆329, updated Jan 29, 2026)
- Official code for PDVN: Retrosynthetic Planning with Dual Value Networks (ICML 2023) (☆31, updated Apr 21, 2024)
- Official implementation of Self-Exploring Language Models (SELM) (☆63, updated Jun 4, 2024)
- Latest Evaluation Toolkit (LatestEval): assessing language models with the latest, uncontaminated materials (☆28, updated Feb 17, 2025)
- Code for Contrastive Preference Learning (CPL) (☆179, updated Nov 22, 2024)
- PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing (☆21, updated Mar 18, 2025)
- Introducing Filtered Direct Preference Optimization (fDPO), which enhances language model alignment with human preferences by discarding lo… (☆16, updated Nov 27, 2024)
- ☆16, updated Oct 21, 2024
- A framework for decoupling and assessing the capabilities of VLMs (☆43, updated Jun 28, 2024)
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) (☆904, updated Sep 30, 2025)
- Official implementation of MIA-DPO (☆70, updated Jan 23, 2025)
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" (☆28, updated Dec 19, 2023)
- ☆215, updated Feb 20, 2025
- Implementation of the ICLR 2025 paper "Q-Adapter: Customizing Pre-trained LLMs to New Preferences with Forgetting Mitigation" (☆18, updated Oct 5, 2024)
- ☆16, updated Jul 23, 2024
- [NAACL 2025] Representing Rule-based Chatbots with Transformers (☆23, updated Feb 9, 2025)
- Official code for the ICLR 2024 paper "Do Generated Data Always Help Contrastive Learning?" (☆31, updated Apr 4, 2024)
- Enabling Mixed Opponent Strategy Script and Self-play on SMAC (☆41, updated Jul 24, 2025)
- Official repository for ORPO (☆471, updated May 31, 2024)
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process (☆30, updated Aug 2, 2024)