microsoft / rStar
☆1,388 · Updated 4 months ago
Alternatives and similar repositories for rStar
Users interested in rStar are comparing it to the libraries listed below.
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,062 · Updated 6 months ago
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,727 · Updated 8 months ago
- ☆970 · Updated last year
- ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning & ReCall: Learning to Reason with Tool Call for LLMs via Rei… ☆1,317 · Updated 8 months ago
- Large Reasoning Models ☆807 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,533 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,547 · Updated this week
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆2,511 · Updated 2 weeks ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆567 · Updated 9 months ago
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆675 · Updated 10 months ago
- Official Repo for Open-Reasoner-Zero ☆2,087 · Updated 8 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,803 · Updated 10 months ago
- ☆1,088 · Updated 3 weeks ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,205 · Updated 5 months ago
- [NeurIPS 2025] TTRL: Test-Time Reinforcement Learning ☆986 · Updated 4 months ago
- 🔍 Search-o1: Agentic Search-Enhanced Large Reasoning Models [EMNLP 2025] ☆1,164 · Updated 2 months ago
- [NeurIPS 2025 Spotlight] ReasonFlux (long-CoT), ReasonFlux-PRM (process reward model) and ReasonFlux-Coder (code generation) ☆519 · Updated 4 months ago
- Training Large Language Models to Reason in a Continuous Latent Space ☆1,496 · Updated 5 months ago
- ☆1,346 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,124 · Updated 8 months ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,833 · Updated last year
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,301 · Updated 3 weeks ago
- A MemAgent framework that can be extrapolated to 3.5M, along with a training framework for RL training of any agent workflow. ☆881 · Updated 6 months ago
- Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning ☆1,215 · Updated last week
- ☆762 · Updated last month
- The official code of ARPO & AEPO ☆880 · Updated last week
- A project to improve skills of large language models ☆813 · Updated this week
- ☆814 · Updated 8 months ago
- A series of technical reports on Slow Thinking with LLMs ☆759 · Updated 5 months ago
- Code and implementations for the paper "AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcemen… ☆577 · Updated 4 months ago