OpenBMB / RLPR
Extrapolating RLVR to General Domains without Verifiers
☆168 · Updated last month
Alternatives and similar repositories for RLPR
Users interested in RLPR are comparing it to the libraries listed below.
- ☆271 · Updated 3 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆131 · Updated 5 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning ☆157 · Updated 2 weeks ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆340 · Updated 2 months ago
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆330 · Updated this week
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆256 · Updated 4 months ago
- A comprehensive collection of process reward models ☆110 · Updated this week
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆137 · Updated 2 months ago
- Chain-of-Thought (CoT) is so hot, and so long! We need shorter reasoning processes! ☆69 · Updated 6 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆303 · Updated last week
- ☆211 · Updated 7 months ago
- ☆333 · Updated 2 months ago
- 🔧 Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning ☆264 · Updated last month
- ☆297 · Updated 4 months ago
- ☆154 · Updated 4 months ago
- Towards a Unified View of Large Language Model Post-Training ☆144 · Updated last month
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆152 · Updated 2 weeks ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space ☆164 · Updated last week
- Test-time preference optimization (ICML 2025) ☆168 · Updated 5 months ago
- A version of verl to support diverse tool use ☆570 · Updated last week
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning" ☆81 · Updated 4 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs