alphadl / R1
Enhanced GRPO with more verifiable rewards and real-time evaluators
☆37 · Updated 4 months ago
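The tagline above names GRPO with verifiable rewards. As rough orientation only (not this repository's code), here is a minimal sketch of the group-relative advantage computation at GRPO's core, paired with a hypothetical exact-match verifier standing in for a "verifiable reward"; the function names and the mean/std normalization variant are assumptions.

```python
import statistics

def verifiable_reward(completion: str, gold_answer: str) -> float:
    # Hypothetical exact-match verifier: 1.0 if the final answer
    # matches the reference, else 0.0 (a binary "verifiable" reward).
    return 1.0 if completion.strip() == gold_answer.strip() else 0.0

def grpo_advantages(completions, gold_answer):
    """Group-relative advantages: score each sampled completion,
    then normalize rewards within the group (zero mean, unit std)."""
    rewards = [verifiable_reward(c, gold_answer) for c in completions]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard: all-equal rewards
    return [(r - mean) / std for r in rewards]

# Example: a group of 4 samples for one prompt; the two correct
# completions get positive advantage, the wrong ones negative.
group = ["42", "41", "42", "7"]
print(grpo_advantages(group, gold_answer="42"))  # [1.0, -1.0, 1.0, -1.0]
```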
Alternatives and similar repositories for R1
Users interested in R1 are comparing it to the repositories listed below.
- [TMLR 2025] A Survey on the Honesty of Large Language Models · ☆60 · Updated 10 months ago
- Model merging is a highly efficient approach for long-to-short reasoning · ☆89 · Updated this week
- The repository for the paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" · ☆50 · Updated 11 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models · ☆189 · Updated last year
- The code and data for the paper "JiuZhang3.0" · ☆49 · Updated last year
- My commonly used tools · ☆61 · Updated 9 months ago
- [ACL 2025] The official code repository for "PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models" · ☆81 · Updated 8 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" · ☆165 · Updated last year
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models · ☆57 · Updated last year
- [MM 2025] CMM-Math: A Chinese Multimodal Math Dataset To Evaluate and Enhance the Mathematics Reasoning of Large Multimodal Models · ☆42 · Updated last year
- [ACL 2024] The official codebase for "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" · ☆131 · Updated 11 months ago
- ☆17 · Updated 2 years ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style · ☆63 · Updated 3 months ago
- ☆27 · Updated 2 years ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning · ☆40 · Updated 2 years ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? · ☆83 · Updated last year
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping · ☆54 · Updated 4 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations · ☆135 · Updated 6 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" · ☆113 · Updated last month
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs · ☆68 · Updated 11 months ago
- ☆51 · Updated 5 months ago
- ☆275 · Updated 3 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models · ☆56 · Updated last year
- [ACL 2025] A Neural-Symbolic Self-Training Framework · ☆115 · Updated 4 months ago
- [ACL 2024] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning · ☆178 · Updated 3 months ago
- ☆46 · Updated 6 months ago
- [NeurIPS 2024] Official code for "🎯 DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving" · ☆115 · Updated 10 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" · ☆130 · Updated 7 months ago
- ☆18 · Updated 10 months ago
- [EMNLP 2025] CompassVerifier: A Unified and Robust Verifier for LLMs Evaluation and Outcome Reward · ☆49 · Updated 2 months ago