YuxiXie / MCTS-DPO
This repository contains the source code for Self-Evaluation Guided MCTS for online DPO.
☆329 · Jan 29, 2026 · Updated 2 weeks ago
Alternatives and similar repositories for MCTS-DPO
Users interested in MCTS-DPO are comparing it to the repositories listed below.
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆690 · Jan 20, 2025 · Updated last year
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆391 · Jan 19, 2025 · Updated last year
- ☆342 · Jun 5, 2025 · Updated 8 months ago
- ☆130 · Jun 18, 2024 · Updated last year
- O1 Replication Journey ☆1,999 · Jan 14, 2025 · Updated last year
- ☆970 · Jan 23, 2025 · Updated last year
- A library for advanced large language model reasoning ☆2,330 · Jun 10, 2025 · Updated 8 months ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆285 · May 26, 2024 · Updated last year
- ☆103 · Dec 7, 2023 · Updated 2 years ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,830 · Jan 17, 2025 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆168 · Mar 14, 2025 · Updated 11 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,805 · Mar 18, 2025 · Updated 10 months ago
- Large Reasoning Models ☆806 · Dec 3, 2024 · Updated last year
- An easy-to-use, scalable, high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Async RL) ☆8,989 · Feb 6, 2026 · Updated last week
- ☆72 · Apr 2, 2024 · Updated last year
- ☆1,033 · Dec 17, 2024 · Updated last year
- Implementation of the paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee…" ☆37 · Jul 25, 2024 · Updated last year
- [NeurIPS 2023] We use large language models as a commonsense world model and heuristic policy within Monte Carlo Tree Search, enabling bett… ☆295 · Nov 16, 2024 · Updated last year
- ☆554 · Jan 2, 2025 · Updated last year
- Official repo for Open-Reasoner-Zero ☆2,085 · Jun 2, 2025 · Updated 8 months ago
- ☆23 · Jul 5, 2024 · Updated last year
- Recipes to train reward models for RLHF ☆1,512 · Apr 24, 2025 · Updated 9 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Mar 21, 2025 · Updated 10 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆65 · Oct 18, 2024 · Updated last year
- ☆41 · Jun 19, 2024 · Updated last year
- Code for the ACL 2025 publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆32 · Jun 25, 2025 · Updated 7 months ago
- ☆51 · Oct 28, 2024 · Updated last year
- A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques ☆6,889 · Dec 17, 2025 · Updated 2 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Oct 30, 2024 · Updated last year
- [NeurIPS'24] Official code for "🎯 DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving" ☆120 · Dec 10, 2024 · Updated last year
- A series of technical reports on Slow Thinking with LLMs ☆759 · Aug 13, 2025 · Updated 6 months ago
- A scalable automated alignment method for large language models; resources for "Aligning Large Language Models via Self-Steering Optimiza…" ☆20 · Nov 21, 2024 · Updated last year
- Simple RL training for reasoning ☆3,827 · Dec 23, 2025 · Updated last month
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆186 · May 25, 2025 · Updated 8 months ago
- Code for Quiet-STaR ☆740 · Aug 21, 2024 · Updated last year
- ☆322 · Jul 25, 2024 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆86 · May 21, 2025 · Updated 8 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,092 · Jun 1, 2023 · Updated 2 years ago
- Directional Preference Alignment ☆58 · Sep 23, 2024 · Updated last year