WeiXiongUST / Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning
This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning", with multi-turn DPO and KTO.
☆20 · Updated last month
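For readers unfamiliar with the method: the multi-turn DPO variant described in the paper applies the DPO objective to full tool-integrated trajectories while masking out tokens produced by the external environment (e.g., a code interpreter), so that only agent-generated tokens contribute to the implicit reward. Below is a minimal PyTorch sketch of that masking idea; all function and argument names here are illustrative assumptions, not this repository's actual API:

```python
import torch
import torch.nn.functional as F


def masked_sequence_logps(token_logps: torch.Tensor,
                          action_mask: torch.Tensor) -> torch.Tensor:
    # token_logps: (batch, seq_len) per-token log-probabilities of a trajectory.
    # action_mask: (batch, seq_len) 1.0 for agent-generated tokens, 0.0 for
    # external tool output (e.g., code-interpreter observations).
    return (token_logps * action_mask).sum(dim=-1)


def multi_turn_dpo_loss(policy_chosen, ref_chosen, chosen_mask,
                        policy_rejected, ref_rejected, rejected_mask,
                        beta: float = 0.1) -> torch.Tensor:
    # Per-trajectory log-ratios vs. the reference model, summed over
    # agent tokens only (observation tokens are masked out).
    chosen = masked_sequence_logps(policy_chosen - ref_chosen, chosen_mask)
    rejected = masked_sequence_logps(policy_rejected - ref_rejected, rejected_mask)
    # Standard Bradley-Terry (DPO) loss on the masked reward margin.
    return -F.logsigmoid(beta * (chosen - rejected)).mean()
```

The KTO variant would replace the pairwise logistic loss above with KTO's unpaired, per-example formulation; see the repository itself for the authors' actual training code.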
Alternatives and similar repositories for Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning:
Users interested in Building-Math-Agents-with-Multi-Turn-Iterative-Preference-Learning are comparing it to the libraries listed below.
- Directional Preference Alignment ☆54 · Updated 4 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆22 · Updated 4 months ago
- GenRM-CoT: Data release for verification rationales ☆45 · Updated 3 months ago
- Rewarded soups official implementation ☆55 · Updated last year
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆49 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆112 · Updated 4 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆42 · Updated 6 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆64 · Updated 5 months ago
- ☆34 · Updated 11 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆15 · Updated 3 weeks ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆40 · Updated last year
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆114 · Updated 2 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆29 · Updated 7 months ago
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆24 · Updated last year
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆49 · Updated 7 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆55 · Updated last month
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆44 · Updated 2 months ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards ☆49 · Updated 8 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆98 · Updated this week
- ☆47 · Updated 2 months ago
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆24 · Updated 10 months ago
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆33 · Updated 4 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆44 · Updated last month
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆58 · Updated last week
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆98 · Updated last year
- [NeurIPS'24] Official code for 🎯 DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving ☆90 · Updated last month
- The official implementation of Self-Exploring Language Models (SELM) ☆61 · Updated 7 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆55 · Updated 6 months ago
- ☆58 · Updated 4 months ago