lzhxmu / CPPO
CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models
☆125 · Updated 2 weeks ago
Alternatives and similar repositories for CPPO
Users interested in CPPO are comparing it to the libraries listed below.
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆173 · Updated last week
- ☆196 · Updated 2 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆202 · Updated this week
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆100 · Updated 2 months ago
- ☆168 · Updated last month
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆109 · Updated last week
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆122 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆67 · Updated 3 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models ☆121 · Updated last month
- ☆184 · Updated last month
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆141 · Updated 2 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆158 · Updated 2 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆137 · Updated 4 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆107 · Updated 3 weeks ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆113 · Updated last week
- ☆291 · Updated 2 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆147 · Updated 2 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond ☆210 · Updated this week
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆96 · Updated last month
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning" ☆72 · Updated 3 weeks ago
- Paper List of Inference/Test-Time Scaling/Computing ☆220 · Updated 2 weeks ago
- ☆151 · Updated 2 weeks ago
- ☆153 · Updated 3 weeks ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆141 · Updated 3 months ago
- Repo for the paper https://arxiv.org/abs/2504.13837 ☆122 · Updated 3 weeks ago
- Official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆90 · Updated last week
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆74 · Updated 2 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆93 · Updated 2 months ago
- A comprehensive collection of process reward models ☆76 · Updated last week
- [ICML 2025] From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation ☆90 · Updated last week