sail-sg / oat-zero
A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior.
☆240 Updated 2 months ago
Alternatives and similar repositories for oat-zero
Users interested in oat-zero are comparing it to the libraries listed below.
- ☆202 Updated 4 months ago
- ☆297 Updated 3 weeks ago
- A Comprehensive Survey on Long Context Language Modeling ☆151 Updated 2 weeks ago
- ☆217 Updated 3 weeks ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆186 Updated 3 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆220 Updated last month
- Repo of paper "Free Process Rewards without Process Labels" ☆152 Updated 3 months ago
- A series of technical reports on Slow Thinking with LLMs ☆699 Updated last week
- ☆152 Updated last month
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆94 Updated 3 months ago
- Official repository of "Learning to Reason under Off-Policy Guidance" ☆232 Updated 2 weeks ago
- A version of verl to support tool use ☆251 Updated this week
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆112 Updated 2 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆226 Updated last year
- ☆186 Updated 2 months ago
- Related works and background techniques for OpenAI o1 ☆221 Updated 5 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆157 Updated 2 weeks ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆639 Updated 5 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆158 Updated last month
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆264 Updated 9 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆185 Updated last year
- ☆169 Updated this week
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆303 Updated last month
- ☆288 Updated 10 months ago
- ☆231 Updated last week
- slime is an LLM post-training framework aimed at scaling RL ☆328 Updated this week
- Source code for Self-Evaluation Guided MCTS for online DPO ☆317 Updated 10 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆219 Updated last month
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆122 Updated this week
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆251 Updated 2 weeks ago