phonism / CP-Zero
Based on the R1-Zero method, using rule-based rewards and GRPO (Group Relative Policy Optimization) on the Code Contests dataset.
☆18 · Updated 5 months ago
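The description above mentions rule-based rewards in the R1-Zero style: instead of a learned reward model, each completion is scored by a fixed rule, typically whether the extracted solution passes the problem's test cases. A minimal sketch of such a binary reward for Code Contests-style problems (function names and the `solve(x)` convention are illustrative assumptions, not CP-Zero's actual API):

```python
# Minimal sketch of a rule-based reward in the R1-Zero style:
# a completion earns reward 1.0 only if its extracted code passes
# every provided test case, otherwise 0.0. All names here are
# illustrative, not CP-Zero's actual interface.
import re


def extract_code(completion: str) -> str:
    """Pull the first fenced code block out of a model completion."""
    m = re.search(r"```(?:python)?\n(.*?)```", completion, re.DOTALL)
    return m.group(1) if m else ""


def rule_based_reward(completion: str, tests: list) -> float:
    """Binary reward: 1.0 iff the solution passes all (input, expected) tests."""
    code = extract_code(completion)
    if not code:
        return 0.0
    for inp, expected in tests:
        ns = {}
        try:
            exec(code, ns)  # assumes the solution defines solve(x)
            if ns["solve"](inp) != expected:
                return 0.0
        except Exception:
            return 0.0
    return 1.0
```

In GRPO, a group of completions is sampled per prompt and each completion's advantage is its reward normalized against the group mean, so a sparse binary signal like this is enough to drive policy updates. (A real harness would also sandbox the execution rather than call `exec` directly.)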
Alternatives and similar repositories for CP-Zero
Users interested in CP-Zero are comparing it to the repositories listed below.
- Reproducing R1 for Code with Reliable Rewards ☆258 · Updated 4 months ago
- [NeurIPS'24] Official code for *DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆113 · Updated 9 months ago
- ☆47 · Updated last month
- Resources for the Enigmata Project. ☆71 · Updated last month
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆176 · Updated 2 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆130 · Updated 5 months ago
- ☆209 · Updated 7 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆80 · Updated 6 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆111 · Updated 4 months ago
- The official repository of the Omni-MATH benchmark. ☆88 · Updated 9 months ago
- Async pipelined version of Verl ☆117 · Updated 5 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆105 · Updated last month
- Repo of the paper "Free Process Rewards without Process Labels" ☆163 · Updated 6 months ago
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL 2025] ☆88 · Updated 5 months ago
- [NeurIPS 2025 D&B] SWE-bench Goes Live! ☆122 · Updated this week
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆84 · Updated last week
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆51 · Updated 10 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated last year
- ☆74 · Updated 10 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆284 · Updated last week
- ☆119 · Updated 3 months ago
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆53 · Updated 7 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆226 · Updated 2 weeks ago
- MiroRL is an MCP-first reinforcement learning framework for deep research agents. ☆160 · Updated last month
- ☆67 · Updated 5 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆118 · Updated 5 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆69 · Updated 9 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆82 · Updated 3 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆171 · Updated 3 months ago
- ☆332 · Updated last month