Wangmerlyn / MCTS-GSM8k-Demo
This is a repo showcasing the use of Monte Carlo Tree Search (MCTS) with LLMs to solve GSM8K problems.
☆72 · Updated last month
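As a rough illustration of the idea behind this repo (not its actual implementation), the sketch below runs vanilla MCTS over a tree of LLM-generated reasoning steps. The functions `propose_steps` and `reward` are hypothetical stand-ins for the LLM calls that would generate candidate next steps and score completed solutions against GSM8K answers.

```python
# Minimal, self-contained MCTS-over-reasoning-steps sketch (illustrative only).
import math
import random


def propose_steps(state, k=3):
    """Hypothetical stand-in for an LLM proposing k candidate next reasoning steps."""
    return [state + [f"step{len(state)}.{i}"] for i in range(k)]


def reward(state):
    """Hypothetical stand-in for scoring a reasoning chain (e.g., answer correctness)."""
    return random.random()


class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb(self, c=1.4):
        # Unvisited children are explored first.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits
        )


def mcts(question, iterations=100, max_depth=4):
    root = Node([question])
    for _ in range(iterations):
        # 1. Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb)
        # 2. Expansion: query the (stand-in) LLM for candidate next steps.
        if len(node.state) < max_depth:
            node.children = [Node(s, parent=node) for s in propose_steps(node.state)]
            node = random.choice(node.children)
        # 3. Simulation: score the (partial) reasoning chain.
        value = reward(node.state)
        # 4. Backpropagation: update visit counts and values up to the root.
        while node is not None:
            node.visits += 1
            node.value += value
            node = node.parent
    # Return the most-visited first step as the preferred continuation.
    return max(root.children, key=lambda n: n.visits).state


if __name__ == "__main__":
    print(mcts("Q: Natalia sold 48 clips in April and half as many in May. How many in total?"))
```

In a real setup, `reward` would typically check the extracted numeric answer against the GSM8K label or use a learned verifier rather than a random score.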
Alternatives and similar repositories for MCTS-GSM8k-Demo:
Users interested in MCTS-GSM8k-Demo are comparing it to the libraries listed below.
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆174 · Updated last week
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆95 · Updated last month
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning. ☆133 · Updated 4 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆64 · Updated last week
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆61 · Updated 5 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction. ☆68 · Updated last month
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations. ☆75 · Updated last week
- Reformatted Alignment. ☆115 · Updated 7 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆52 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs". ☆115 · Updated last month
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning". ☆107 · Updated this week
- [COLM 2024] SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. ☆32 · Updated 10 months ago
- [preprint] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆43 · Updated 3 months ago
- Code for the paper "Teaching Language Models to Critique via Reinforcement Learning". ☆94 · Updated last week
- [ICLR 2025] Benchmarking Agentic Workflow Generation. ☆79 · Updated 2 months ago
- The official repository of the Omni-MATH benchmark. ☆80 · Updated 4 months ago
- MPO: Boosting LLM Agents with Meta Plan Optimization. ☆50 · Updated last month
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆118 · Updated 5 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO). ☆136 · Updated 2 months ago
- On Memorization of Large Language Models in Logical Reasoning. ☆63 · Updated 3 weeks ago
- A Comprehensive Survey on Long Context Language Modeling. ☆131 · Updated 3 weeks ago