cmu-l3 / l1
L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning
☆213 · Updated 3 weeks ago
Alternatives and similar repositories for l1
Users interested in l1 are comparing it to the repositories listed below.
- ☆293 · Updated this week
- Repo of paper "Free Process Rewards without Process Labels" ☆149 · Updated 2 months ago
- ☆198 · Updated last week
- Research Code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆94 · Updated 2 months ago
- ☆173 · Updated 2 months ago
- ☆201 · Updated 3 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆121 · Updated 2 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆151 · Updated last month
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆145 · Updated 2 months ago
- ☆64 · Updated last month
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond ☆228 · Updated this week
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆79 · Updated 3 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆147 · Updated 2 weeks ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆207 · Updated 3 weeks ago
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆205 · Updated this week
- Chain of Thought (CoT) is so hot! So long! We need a shorter reasoning process! ☆53 · Updated 2 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆141 · Updated 5 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆106 · Updated last month
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆184 · Updated 2 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆239 · Updated last month
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆155 · Updated 2 weeks ago
- ☆208 · Updated last week
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆54 · Updated 6 months ago
- Official repository for "Reinforcement Learning for Reasoning in Large Language Models with One Training Example" ☆257 · Updated this week
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆120 · Updated this week
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆220 · Updated last year
- ☆107 · Updated last week
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆231 · Updated 3 weeks ago
- Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models ☆414 · Updated last week
- Repository containing the source code for Self-Evaluation Guided MCTS for online DPO. ☆314 · Updated 9 months ago