THUDM / ReST-MCTS
ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024)
☆683 · Updated 10 months ago
Alternatives and similar repositories for ReST-MCTS
Users interested in ReST-MCTS are comparing it to the repositories listed below.
- A series of technical reports on Slow Thinking with LLMs ☆751 · Updated 4 months ago
- ☆551 · Updated 11 months ago
- ☆341 · Updated 6 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆326 · Updated last year
- ☆1,024 · Updated 5 months ago
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆388 · Updated 10 months ago
- A version of verl that supports diverse tool use ☆722 · Updated 2 weeks ago
- ☆968 · Updated 10 months ago
- Official repository of "Learning to Reason under Off-Policy Guidance" ☆383 · Updated 2 months ago
- ☆319 · Updated 6 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆509 · Updated last year
- Large Reasoning Models ☆807 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆667 · Updated 6 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆248 · Updated 7 months ago
- ☆328 · Updated 6 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆933 · Updated 9 months ago
- [TMLR 2025] Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models