phonism / CP-Zero
Based on the R1-Zero method, using rule-based rewards and GRPO on the Code Contests dataset.
★18 · Updated 9 months ago
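The repo description above mentions R1-Zero-style training with rule-based rewards on Code Contests. As a rough illustration of what "rule-based reward" means in that setting, here is a minimal sketch: a completion earns full reward only if its extracted code passes every provided test case, with a small partial reward for well-formatted but incorrect code. All names (`rule_based_reward`, `solve`, the 0.1 partial reward) are hypothetical, not CP-Zero's actual implementation.

```python
# Minimal sketch of an R1-Zero-style rule-based reward for code generation.
# Hypothetical shape; CP-Zero's actual reward logic may differ.

def rule_based_reward(completion: str, tests: list[tuple[str, str]]) -> float:
    """Score a model completion against (input, expected_output) test pairs.

    Returns 1.0 if the extracted code passes all tests, 0.1 if the
    completion contains a code block but fails or crashes, 0.0 if it
    violates the format rule (no code block at all).
    """
    if "```" not in completion:
        return 0.0  # format rule: answer must contain a fenced code block
    # Extract the first fenced block; strip an optional "python" language tag.
    code = completion.split("```")[1].removeprefix("python\n")
    namespace: dict = {}
    try:
        exec(code, namespace)  # expected to define solve(input_str) -> output_str
        solve = namespace["solve"]
        passed = all(solve(inp).strip() == out.strip() for inp, out in tests)
    except Exception:
        return 0.1  # well-formatted but broken code
    return 1.0 if passed else 0.1
```

A reward like this is then fed to GRPO, which compares scores across a group of sampled completions per problem rather than learning a separate value model.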
Alternatives and similar repositories for CP-Zero
Users interested in CP-Zero are comparing it to the repositories listed below.
- Reproducing R1 for Code with Reliable Rewards ★282 · Updated 8 months ago
- [NeurIPS'24] Official code for *🎯 DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ★120 · Updated last year
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ★182 · Updated 6 months ago
- ★215 · Updated 11 months ago
- ★50 · Updated 5 months ago
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ★61 · Updated 11 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ★143 · Updated 2 months ago
- Resources for the Enigmata Project. ★76 · Updated 5 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ★120 · Updated 8 months ago
- ★67 · Updated last year
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL25] ★95 · Updated 9 months ago
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ★104 · Updated 4 months ago
- The official repository of the Omni-MATH benchmark. ★93 · Updated last year
- Async pipelined version of Verl ★124 · Updated 9 months ago
- [COLM 2025] Code for Paper: Learning Adaptive Parallel Reasoning with Language Models ★138 · Updated last month
- ★87 · Updated 5 months ago
- A Sober Look at Language Model Reasoning ★92 · Updated 2 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ★55 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ★245 · Updated 4 months ago
- ★80 · Updated 10 months ago
- ★78 · Updated last year
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents