sail-sg / oat-zero
A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior.
☆239 · Updated last month
Alternatives and similar repositories for oat-zero
Users interested in oat-zero are comparing it to the repositories listed below.
- ☆201 · Updated 3 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆147 · Updated last week
- ☆198 · Updated last week
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆213 · Updated 2 weeks ago
- ☆293 · Updated this week
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆205 · Updated this week
- Reproducing R1 for Code with Reliable Rewards ☆201 · Updated 3 weeks ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆149 · Updated 2 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆179 · Updated 2 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆218 · Updated last year
- A series of technical reports on Slow Thinking with LLMs ☆679 · Updated this week
- ☆208 · Updated last week
- Source code for Self-Evaluation Guided MCTS for online DPO ☆314 · Updated 9 months ago
- ☆193 · Updated this week
- Related works and background techniques for OpenAI o1 ☆221 · Updated 4 months ago
- ☆150 · Updated last month
- ☆282 · Updated 10 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆94 · Updated 2 months ago
- ☆145 · Updated last week
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆141 · Updated 5 months ago
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆368 · Updated 4 months ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆299 · Updated last month
- Trinity-RFT is a general-purpose, flexible, and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆100 · Updated this week
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details ☆182 · Updated this week
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆184 · Updated 2 months ago
- SkyRL-v0: Train Real-World Long-Horizon Agents via Reinforcement Learning ☆343 · Updated last week
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆630 · Updated 4 months ago
- verl-agent is an extension of veRL designed for training LLM/VLM agents via RL; it is also the official code for the paper "Group-in… ☆232 · Updated this week
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆326 · Updated 8 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆155 · Updated last week