LeapLabTHU / limit-of-RLVR
Repository for the paper https://arxiv.org/abs/2504.13837 (☆122)
Alternatives and similar repositories for limit-of-RLVR
Users interested in limit-of-RLVR are comparing it to the repositories listed below.
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" (☆90)
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* (☆100)
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models (☆107)
- Code for "Reasoning to Learn from Latent Thoughts" (☆94)
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation (☆54)
- SIFT: Grounding LLM Reasoning in Contexts via Stickers (☆56)
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (☆125)
- Repository for the paper "Free Process Rewards without Process Labels" (☆147)
- A Self-Training Framework for Vision-Language Reasoning (☆78)
- R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning (☆109)
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" (☆113)
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models (☆121)
- Official repository of "Learning to Reason under Off-Policy Guidance" (☆173)
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" (☆74)
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction (☆69)
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning (☆202)
- Code for "A Sober Look at Progress in Language Model Reasoning" paper☆45Updated this week
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme☆122Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning (☆67)
- A comprehensive collection of process reward models (☆76)