A lightweight reproduction of DeepSeek-R1-Zero with an in-depth analysis of self-reflection behavior.
☆249 · Apr 15, 2025 · Updated 10 months ago
Alternatives and similar repositories for oat-zero
Users that are interested in oat-zero are comparing it to the libraries listed below
- OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆633 · Jan 29, 2026 · Updated last month
- Simple RL training for reasoning ☆3,830 · Dec 23, 2025 · Updated 2 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,064 · Jul 30, 2025 · Updated 7 months ago
- Reproduce R1 Zero on Logic Puzzle ☆2,439 · Mar 20, 2025 · Updated 11 months ago
- Official Repo for Open-Reasoner-Zero ☆2,087 · Jun 2, 2025 · Updated 9 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,219 · Aug 27, 2025 · Updated 6 months ago
- A series of technical reports on Slow Thinking with LLMs ☆760 · Aug 13, 2025 · Updated 6 months ago
- ☆331 · May 31, 2025 · Updated 9 months ago
- Democratizing Reinforcement Learning for LLMs ☆5,167 · Updated this week
- ☆19 · May 20, 2025 · Updated 9 months ago
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆2,522 · Updated this week
- [AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆94 · Nov 8, 2025 · Updated 3 months ago
- Minimal reproduction of DeepSeek R1-Zero ☆12,853 · Updated this week
- Fully open data curation for reasoning models ☆2,218 · Dec 2, 2025 · Updated 3 months ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,037 · Feb 21, 2026 · Updated last week
- ☆20 · Apr 16, 2025 · Updated 10 months ago
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) ☆10 · Jul 15, 2024 · Updated last year
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆84 · Oct 23, 2024 · Updated last year
- ☆225 · Mar 26, 2025 · Updated 11 months ago
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,519 · Updated this week
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆184 · May 20, 2025 · Updated 9 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆864 · Dec 29, 2025 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Mar 21, 2025 · Updated 11 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,809 · Mar 18, 2025 · Updated 11 months ago
- ☆335 · May 24, 2025 · Updated 9 months ago
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Jun 20, 2025 · Updated 8 months ago
- [COLM 2025] Official code for "When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoni…" ☆15 · Oct 31, 2025 · Updated 4 months ago
- ☆215 · Feb 20, 2025 · Updated last year
- s1: Simple test-time scaling ☆6,636 · Jun 25, 2025 · Updated 8 months ago
- Minimal-cost training for a 0.5B R1-Zero ☆808 · May 14, 2025 · Updated 9 months ago
- Short RL ☆18 · May 26, 2025 · Updated 9 months ago
- ☆19 · Jun 4, 2025 · Updated 8 months ago
- A Sober Look at Language Model Reasoning ☆93 · Nov 18, 2025 · Updated 3 months ago
- ☆762 · Dec 23, 2025 · Updated 2 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆59 · Jan 5, 2026 · Updated last month
- ☆72 · Jun 10, 2025 · Updated 8 months ago
- [ICML 2025] Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search ☆108 · Jun 3, 2025 · Updated 9 months ago
- ☆324 · Jul 25, 2024 · Updated last year
- ☆25 · Apr 10, 2025 · Updated 10 months ago