policy-gradient / GRPO-Zero
Implementing DeepSeek R1's GRPO algorithm from scratch
☆1,526 · Updated 4 months ago
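GRPO's core idea is to drop the learned value baseline of PPO and instead normalize each sampled completion's reward against the mean and standard deviation of its group. A minimal sketch of that advantage computation follows; the function name and epsilon are illustrative and not taken from the GRPO-Zero codebase:

```python
def group_relative_advantages(rewards, eps=1e-8):
    """Group-relative advantages as in GRPO:
    A_i = (r_i - mean(r)) / (std(r) + eps),
    computed over one group of completions for the same prompt.
    (Illustrative sketch; eps guards against zero variance.)"""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

Because the baseline is just group statistics, a uniformly rewarded group yields zero advantage everywhere, and no value network needs to be trained.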
Alternatives and similar repositories for GRPO-Zero
Users interested in GRPO-Zero are comparing it to the libraries listed below.
- Minimalistic 4D-parallelism distributed training framework for education purpose ☆1,673 · Updated last month
- nanoGPT style version of Llama 3.1 ☆1,417 · Updated last year
- NanoGPT (124M) in 3 minutes ☆3,037 · Updated last month
- The simplest, fastest repository for training/finetuning small-sized VLMs. ☆3,907 · Updated this week
- Muon is an optimizer for hidden layers in neural networks ☆1,547 · Updated last month
- Textbook on reinforcement learning from human feedback ☆1,185 · Updated this week
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,065 · Updated 3 weeks ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆516 · Updated last month
- Official repository for our work on micro-budget training of large-scale diffusion models. ☆1,507 · Updated 7 months ago
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,239 · Updated last week
- Code for BLT research paper ☆1,958 · Updated 2 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,209 · Updated 9 months ago
- Muon is Scalable for LLM Training ☆1,273 · Updated 2 weeks ago
- procedural reasoning datasets ☆1,045 · Updated 2 weeks ago
- Recipes to scale inference-time compute of open models ☆1,112 · Updated 2 months ago
- Simple RL training for reasoning ☆3,726 · Updated 2 weeks ago
- Official PyTorch implementation for "Large Language Diffusion Models" ☆2,731 · Updated last week
- UNet diffusion model in pure CUDA ☆615 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆1,760 · Updated this week
- The Multilayer Perceptron Language Model ☆558 · Updated last year
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,505 · Updated 3 months ago
- The Autograd Engine ☆628 · Updated 11 months ago
- Minimalistic large language model 3D-parallelism training ☆2,130 · Updated last month
- Continuous Thought Machines, because thought takes time and reasoning is a process. ☆1,261 · Updated last month
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,003 · Updated 3 weeks ago
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper ☆720 · Updated this week
- Puzzles for learning Triton ☆1,925 · Updated 9 months ago
- Dream 7B, a large diffusion language model ☆904 · Updated 2 months ago
- Best practices & guides on how to write distributed pytorch training code ☆467 · Updated 5 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆812 · Updated last month