JerryWu-code / TinyZero
Own reproduction of a tiny version of DeepSeek R1-Zero on two A100s.
☆71 · Updated 6 months ago
Alternatives and similar repositories for TinyZero
Users interested in TinyZero are comparing it to the libraries listed below.
- ☆313 · Updated 2 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆134 · Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆248 · Updated 3 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆310 · Updated last month
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆321 · Updated last year
- A version of verl to support tool use ☆341 · Updated this week
- ☆274 · Updated 3 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆245 · Updated 4 months ago
- ☆207 · Updated 6 months ago
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆285 · Updated last month
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆148 · Updated 8 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆128 · Updated 4 months ago
- ☆204 · Updated 5 months ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆376 · Updated 7 months ago
- A series of technical reports on Slow Thinking with LLMs ☆726 · Updated 2 weeks ago
- Related works and background techniques about OpenAI o1 ☆224 · Updated 7 months ago
- ☆327 · Updated last month
- Repo of paper "Free Process Rewards without Process Labels" ☆162 · Updated 5 months ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆218 · Updated last week
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆191 · Updated last year
- Research Code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆101 · Updated 3 weeks ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆660 · Updated 7 months ago
- ☆129 · Updated last year
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆298 · Updated this week
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 5 months ago
- ☆208 · Updated last week
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆250 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆146 · Updated 6 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆56 · Updated 9 months ago
- A research repo for experiments on Reinforcement Finetuning ☆51 · Updated 4 months ago