RUCAIBox / Slow_Thinking_with_LLMs
A series of technical reports on Slow Thinking with LLMs
☆758Updated 5 months ago
Alternatives and similar repositories for Slow_Thinking_with_LLMs
Users interested in Slow_Thinking_with_LLMs are comparing it to the repositories listed below.
- ☆552Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024)☆688Updated last year
- A version of verl to support diverse tool use☆852Updated 3 weeks ago
- ☆427Updated 3 months ago
- R1-searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning☆683Updated 5 months ago
- ☆1,084Updated 3 weeks ago
- [TMLR 2025] Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models☆731Updated 3 months ago
- Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning☆1,201Updated this week
- ☆490Updated 3 months ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs"☆390Updated last year
- The related works and background techniques about Openai o1☆221Updated last year
- Official Repository of "Learning to Reason under Off-Policy Guidance"☆406Updated 3 months ago
- Large Reasoning Models☆807Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning☆512Updated last year
- Scaling Deep Research via Reinforcement Learning in Real-world Environments.☆691Updated 3 months ago
- ☆761Updated last month
- ☆332Updated 8 months ago
- ☆328Updated 8 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains…☆258Updated 5 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior.☆249Updated 9 months ago
- ☆971Updated last year
- [NeurIPS 2025 Spotlight] ReasonFlux (long-CoT), ReasonFlux-PRM (process reward model) and ReasonFlux-Coder (code generation)☆516Updated 4 months ago
- The official code of ARPO & AEPO☆872Updated 3 weeks ago
- ☆341Updated 7 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…☆510Updated this week
- A live reading list for LLM data synthesis (Updated to July, 2025).☆446Updated 5 months ago
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO.☆328Updated last year
- Official code for the paper, "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning"☆153Updated 3 months ago
- ☆322Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning☆260Updated 8 months ago