Vance0124 / Token-level-Direct-Preference-Optimization
Reference implementation for Token-level Direct Preference Optimization (TDPO)
☆147 · Updated 7 months ago
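For orientation, here is a minimal sketch of the kind of token-level preference loss this repo implements: DPO's implicit-reward margin, corrected by the gap in sequential (token-level) KL between the rejected and chosen responses, as in the TDPO paper. The function name, tensor interface, and the `alpha` weight are illustrative assumptions, not the repo's actual API; consult the repository for the exact TDPO1/TDPO2 formulations.

```python
import torch.nn.functional as F

def tdpo_style_loss(policy_logps_w, policy_logps_l,
                    ref_logps_w, ref_logps_l,
                    seq_kl_w, seq_kl_l,
                    beta=0.1, alpha=0.5):
    """Sketch of a TDPO-style loss (names and interface are assumptions).

    *_logps_w / *_logps_l: response-level log-probs (summed over tokens)
        of the chosen (w) and rejected (l) responses under the policy
        and the frozen reference model, shape (batch,).
    seq_kl_w / seq_kl_l: sequential KL D_SeqKL(x, y; pi_ref || pi_theta),
        i.e. per-token KL summed over the response, shape (batch,).
    """
    # DPO's implicit-reward margin between chosen and rejected responses.
    margin = beta * ((policy_logps_w - ref_logps_w)
                     - (policy_logps_l - ref_logps_l))
    # Token-level correction: sequential-KL gap between rejected and
    # chosen responses (the delta term in the TDPO objective).
    delta = alpha * beta * (seq_kl_l - seq_kl_w)
    # Bradley-Terry-style logistic loss on the corrected margin.
    return -F.logsigmoid(margin - delta).mean()
```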
Alternatives and similar repositories for Token-level-Direct-Preference-Optimization
Users interested in Token-level-Direct-Preference-Optimization are comparing it to the libraries listed below.
- This is my attempt to create a Self-Correcting-LLM based on the paper Training Language Models to Self-Correct via Reinforcement Learning by g… ☆35 · Updated 2 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆193 · Updated last year
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)". ☆56 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆86 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆184 · Updated 7 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆162 · Updated 6 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆57 · Updated 10 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆320 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆129 · Updated 5 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆137 · Updated last year
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆136 · Updated last month
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆56 · Updated 9 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆128 · Updated 4 months ago
- ☆205 · Updated 5 months ago
- This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆29 · Updated 9 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆37 · Updated 11 months ago
- ☆122 · Updated 6 months ago
- Source code for the Self-Evaluation Guided MCTS for online DPO ☆321 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning ☆80 · Updated 3 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆104 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆113 · Updated 9 months ago
- ☆68 · Updated last year
- ☆49 · Updated 10 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆81 · Updated 8 months ago
- ☆209 · Updated 6 months ago
- ☆74 · Updated 9 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆149 · Updated 10 months ago
- [ACL'25] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs ☆64 · Updated 10 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆71 · Updated 5 months ago