CMU-AIRe / MRT
Research Code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning".
☆114 · Updated 3 months ago
Alternatives and similar repositories for MRT
Users interested in MRT are comparing it to the repositories listed below
- Repo of paper "Free Process Rewards without Process Labels" ☆165 · Updated 8 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆258 · Updated 6 months ago
- ☆215 · Updated 7 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆318 · Updated last month
- ☆212 · Updated 8 months ago
- ☆309 · Updated 5 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆116 · Updated 11 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆131 · Updated 7 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆115 · Updated 6 months ago
- ☆326 · Updated 5 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆176 · Updated 5 months ago
- [AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆85 · Updated last week
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆180 · Updated 3 months ago
- ☆336 · Updated 3 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆56 · Updated 11 months ago
- ☆116 · Updated 9 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆138 · Updated 6 months ago
- ☆76 · Updated 11 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 9 months ago
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆71 · Updated 6 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆83 · Updated 7 months ago
- ☆67 · Updated 7 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆139 · Updated 3 weeks ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS25] ☆198 · Updated 2 weeks ago
- A Sober Look at Language Model Reasoning ☆87 · Updated last month
- A repo for open research on building large reasoning models ☆112 · Updated last week
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ☆327 · Updated last year
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆364 · Updated last month
- Official implementation of paper "Process Reward Model with Q-value Rankings" ☆64 · Updated 9 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆249 · Updated 6 months ago