iamhankai / Forest-of-Thought
ICML 2025: Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning
☆46 · Updated 3 months ago
Alternatives and similar repositories for Forest-of-Thought
Users interested in Forest-of-Thought are comparing it to the repositories listed below.
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆188 · Updated 4 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆146 · Updated last month
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆170 · Updated last week
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆147 · Updated 7 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆237 · Updated 2 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆268 · Updated 5 months ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models ☆147 · Updated 2 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆113 · Updated 2 months ago
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks ☆233 · Updated 3 months ago
- ☆159 · Updated 3 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆138 · Updated 4 months ago
- Implementation for OAgents: An Empirical Study of Building Effective Agents ☆153 · Updated this week
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆86 · Updated 4 months ago
- ☆262 · Updated last week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆245 · Updated 3 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆255 · Updated last month
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆284 · Updated this week
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆117 · Updated 5 months ago
- ☆78 · Updated 4 months ago
- Efficient Agent Training for Computer Use ☆122 · Updated 2 months ago
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large … ☆88 · Updated 7 months ago
- ☆103 · Updated 8 months ago
- ☆310 · Updated 2 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆127 · Updated 3 months ago
- ☆87 · Updated 2 months ago
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆271 · Updated 3 weeks ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆218 · Updated this week
- Research Code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆100 · Updated last week
- ☆206 · Updated 5 months ago
- ☆323 · Updated last week