This repository contains the source code for Self-Evaluation Guided MCTS for online DPO.
☆329 · updated Jan 29, 2026
Alternatives and similar repositories for MCTS-DPO
Users interested in MCTS-DPO are comparing it to the libraries listed below:
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) · ☆692 · updated Jan 20, 2025
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" · ☆392 · updated Jan 19, 2025
- ☆342 · updated Jun 5, 2025
- ☆130 · updated Jun 18, 2024
- O1 Replication Journey · ☆2,000 · updated Jan 14, 2025
- ☆968 · updated Jan 23, 2025
- A library for advanced large language model reasoning · ☆2,336 · updated Jun 10, 2025
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training · ☆285 · updated May 26, 2024
- ☆102 · updated Dec 7, 2023
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models · ☆1,835 · updated Jan 17, 2025
- Repo of the paper "Free Process Rewards without Process Labels" · ☆169 · updated Mar 14, 2025
- Scalable RL solution for advanced reasoning of language models · ☆1,811 · updated Mar 18, 2025
- Large Reasoning Models · ☆807 · updated Dec 3, 2024
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) · ☆9,084 · updated this week
- ☆72 · updated Apr 2, 2024
- ☆1,033 · updated Dec 17, 2024
- The implementation of the paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee… · ☆38 · updated Jul 25, 2024
- [NeurIPS 2023] We use large language models as commonsense world model and heuristic policy within Monte-Carlo Tree Search, enabling bett… · ☆298 · updated Nov 16, 2024
- ☆552 · updated Jan 2, 2025
- Official repo for Open-Reasoner-Zero · ☆2,084 · updated Jun 2, 2025
- ☆23 · updated Jul 5, 2024
- Recipes to train reward models for RLHF · ☆1,517 · updated Apr 24, 2025
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" · ☆134 · updated Mar 21, 2025
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) · ☆66 · updated Oct 18, 2024
- ☆41 · updated Jun 19, 2024
- Code for the ACL 2025 publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" · ☆32 · updated Jun 25, 2025
- ☆51 · updated Oct 28, 2024
- A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques · ☆6,896 · updated Dec 17, 2025
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) · ☆159 · updated Oct 30, 2024
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* · ☆121 · updated Dec 10, 2024
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza… · ☆20 · updated Nov 21, 2024
- A series of technical reports on Slow Thinking with LLMs · ☆761 · updated Aug 13, 2025
- Simple RL training for reasoning · ☆3,830 · updated Dec 23, 2025
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" · ☆186 · updated May 25, 2025
- Code for Quiet-STaR · ☆741 · updated Aug 21, 2024
- ☆325 · updated Jul 25, 2024
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners · ☆86 · updated May 21, 2025
- 800,000 step-level correctness labels on LLM solutions to MATH problems · ☆2,096 · updated Jun 1, 2023
- Directional Preference Alignment · ☆58 · updated Sep 23, 2024