cmu-l3 / l1
L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning
☆148 · Updated last week
Alternatives and similar repositories for l1:
Users interested in l1 are comparing it to the libraries listed below:
- A Survey on Efficient Reasoning for LLMs ☆116 · Updated this week
- Repo of paper "Free Process Rewards without Process Labels" ☆138 · Updated last week
- ☆166 · Updated last month
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆101 · Updated this week
- Research Code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆74 · Updated last week
- ☆128 · Updated this week
- ☆260 · Updated last week
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆95 · Updated 2 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆131 · Updated last month
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆98 · Updated last week
- ☆48 · Updated last month
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆64 · Updated last month
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆166 · Updated 2 weeks ago
- ☆143 · Updated 3 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆51 · Updated 3 months ago
- ☆103 · Updated 2 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆62 · Updated 3 weeks ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆212 · Updated this week
- A brief and partial summary of RLHF algorithms. ☆127 · Updated 3 weeks ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆158 · Updated this week
- ☆83 · Updated 2 weeks ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆165 · Updated 2 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆119 · Updated 8 months ago
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ☆299 · Updated 7 months ago
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large … ☆74 · Updated 3 months ago
- Official implementation of paper "Process Reward Model with Q-value Rankings" ☆51 · Updated last month
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆46 · Updated 4 months ago
- ☆54 · Updated 5 months ago
- The official repository of the Omni-MATH benchmark. ☆77 · Updated 3 months ago
- Code for Paper: Teaching Language Models to Critique via Reinforcement Learning ☆84 · Updated last month