WeiXiongUST / Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning
This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning", covering multi-turn DPO and multi-turn KTO.
☆32 · Updated last year
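For orientation, below is a minimal sketch of the trajectory-level DPO objective that multi-turn variants of this kind build on. The function name, signature, and `beta` default are illustrative assumptions, not the repository's API; the multi-turn twist is only reflected in the masking convention described in the comments.

```python
import torch
import torch.nn.functional as F

def multi_turn_dpo_loss(policy_chosen_logps: torch.Tensor,
                        policy_rejected_logps: torch.Tensor,
                        ref_chosen_logps: torch.Tensor,
                        ref_rejected_logps: torch.Tensor,
                        beta: float = 0.1) -> torch.Tensor:
    # Illustrative sketch, not the repository's code. Each input is the
    # log-probability of a full multi-turn trajectory under the policy or
    # the frozen reference model, summed over model-generated tokens only
    # (tokens returned by the external tool, e.g. code-interpreter output,
    # are assumed to be masked out before the summation).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Standard Bradley-Terry / DPO logistic loss on the reward margin.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

A KTO-style variant would replace the pairwise logistic term with per-trajectory desirable/undesirable terms; see the paper for the exact objectives used.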
Alternatives and similar repositories for Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning
Users interested in Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning are comparing it to the repositories listed below.
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 10 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Directional Preference Alignment ☆58 · Updated last year
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆39 · Updated last year
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆72 · Updated 11 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆185 · Updated 8 months ago
- GenRM-CoT: Data release for verification rationales ☆67 · Updated last year
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆120 · Updated last year
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆63 · Updated last year
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆32 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆47 · Updated 9 months ago
- ☆117 · Updated last year
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆58 · Updated last year
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated 2 years ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆116 · Updated 2 years ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆120 · Updated last week
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆116 · Updated 6 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Updated 10 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆86 · Updated 8 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆56 · Updated last year
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆202 · Updated 9 months ago
- Reinforcing General Reasoning without Verifiers ☆96 · Updated 7 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 8 months ago
- A repo for open research on building large reasoning models ☆136 · Updated last week
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆151 · Updated 11 months ago
- Code for "Variational Reasoning for Language Models" ☆56 · Updated 4 months ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆115 · Updated 2 years ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆96 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago