WeiXiongUST / Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning
This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning", with support for multi-turn DPO and KTO.
☆25 · Updated 4 months ago
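For context on what "multi-turn DPO" refers to here, the sketch below is a rough illustration of a trajectory-level DPO loss: the usual DPO objective applied to whole chosen/rejected multi-turn trajectories, with tokens coming from the external tool/interpreter masked out of the log-probabilities. This is an assumption-laden sketch for orientation only, not the repository's actual API; the function names, masking convention, and `beta` value are all made up for the example.

```python
# Illustrative sketch of a trajectory-level (multi-turn) DPO loss.
# NOT the repository's API: names, masking convention, and beta are assumptions.
import torch
import torch.nn.functional as F

def trajectory_logprob(logits, labels, mask):
    """Sum log-probs of labeled tokens over a whole multi-turn trajectory,
    keeping only model-generated tokens (mask == 1) so that external
    tool/interpreter observation tokens do not contribute."""
    logps = torch.log_softmax(logits, dim=-1)
    token_logps = torch.gather(logps, -1, labels.unsqueeze(-1)).squeeze(-1)
    return (token_logps * mask).sum(dim=-1)

def multiturn_dpo_loss(policy_chosen, policy_rejected,
                       ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO objective on trajectory-level (masked, summed) log-probs
    of the chosen and rejected trajectories under policy and reference."""
    chosen_ratio = policy_chosen - ref_chosen
    rejected_ratio = policy_rejected - ref_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```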
Alternatives and similar repositories for Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning:
Users interested in Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning are comparing it to the libraries listed below.
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆29 · Updated 7 months ago
- Directional Preference Alignment ☆57 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 7 months ago
- ☆65 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆136 · Updated 2 months ago
- GenRM-CoT: Data release for verification rationales ☆56 · Updated 6 months ago
- This is code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆43 · Updated last year
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆101 · Updated 4 months ago
- Rewarded soups official implementation ☆58 · Updated last year
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆51 · Updated 5 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆57 · Updated 4 months ago
- ☆96 · Updated 9 months ago
- ☆107 · Updated 3 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆151 · Updated 5 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization" (APO) ☆53 · Updated 10 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆60 · Updated 4 months ago
- Repository for the paper "Free Process Rewards without Process Labels" ☆143 · Updated last month
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆47 · Updated last month
- ☆63 · Updated 5 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆91 · Updated 3 weeks ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆43 · Updated last week
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆75 · Updated 8 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆57 · Updated 6 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated 10 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆16 · Updated 3 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆52 · Updated 4 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… ☆29 · Updated 5 months ago
- ☆30 · Updated 5 months ago
- ☆13 · Updated 9 months ago