JerryWu-code / TinyZero
A tiny reproduction of DeepSeek R1-Zero on two A100s.
☆65 · Updated 3 months ago
Alternatives and similar repositories for TinyZero:
Users interested in TinyZero are comparing it to the libraries listed below.
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆181 · Updated last year
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆112 · Updated 2 weeks ago
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ☆306 · Updated 9 months ago
- ☆144 · Updated last month
- ☆192 · Updated 2 months ago
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆44 · Updated 3 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆138 · Updated 2 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆145 · Updated last month
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆90 · Updated 2 weeks ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆122 · Updated 9 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆177 · Updated last month
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆170 · Updated 3 months ago
- ☆137 · Updated 5 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆195 · Updated last month
- ☆138 · Updated this week
- On Memorization of Large Language Models in Logical Reasoning ☆65 · Updated last month
- ☆163 · Updated last month
- This is my attempt to create a Self-Correcting-LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆34 · Updated last month
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆56 · Updated last month
- ☆287 · Updated last month
- ☆111 · Updated this week
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆71 · Updated this week
- ☆150 · Updated 4 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆71 · Updated last week
- ☆327 · Updated 2 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆94 · Updated last month
- A comprehensive collection of process reward models. ☆74 · Updated last week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆234 · Updated 3 weeks ago
- A research repo for experiments on Reinforcement Finetuning ☆46 · Updated 3 weeks ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆51 · Updated 5 months ago