open-thought / tiny-grpo
Minimal hackable GRPO implementation
⭐300 · Updated 9 months ago
Alternatives and similar repositories for tiny-grpo
Users interested in tiny-grpo are comparing it to the libraries listed below. A minimal sketch of the GRPO objective that several of these projects implement follows the list.
- Tina: Tiny Reasoning Models via LoRA ⭐304 · Updated last month
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ⭐564 · Updated 2 weeks ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ⭐125 · Updated 6 months ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ⭐269 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ⭐355 · Updated 11 months ago
- Large Reasoning Models ⭐807 · Updated 11 months ago
- A simplified implementation for experimenting with RLVR on GSM8K; this repository provides a starting point for exploring reasoning. ⭐144 · Updated 9 months ago
- Minimal GRPO implementation from scratch ⭐99 · Updated 8 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ⭐336 · Updated this week
- A project to improve the skills of large language models ⭐608 · Updated last week
- ⭐326 · Updated 5 months ago
- [NeurIPS 2025] TTRL: Test-Time Reinforcement Learning ⭐887 · Updated last month
- Code for the paper "Learning to Reason without External Rewards" ⭐373 · Updated 4 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling" ⭐275 · Updated 8 months ago
- Notes and commented code for RLHF (PPO) ⭐114 · Updated last year
- Single-file, single-GPU, from-scratch, efficient, full-parameter tuning library for "RL for LLMs" ⭐556 · Updated last month
- ⭐94 · Updated 5 months ago
- An extension of the nanoGPT repository for training small MoE models. ⭐210 · Updated 8 months ago
- Exploring Applications of GRPO ⭐248 · Updated 2 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ⭐1,202 · Updated this week
- Understanding R1-Zero-Like Training: A Critical Perspective ⭐1,148 · Updated 2 months ago
- ⭐995 · Updated 4 months ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ⭐284 · Updated last year
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference-Time Scaling ⭐450 · Updated 5 months ago
- Recipes to scale inference-time compute of open models ⭐1,117 · Updated 5 months ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ⭐322 · Updated 6 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ⭐678 · Updated 9 months ago
- An O1 replication for coding ⭐337 · Updated 11 months ago
- A tiny reproduction of DeepSeek R1-Zero on two A100s ⭐73 · Updated 9 months ago
- A highly capable 2.4B lightweight LLM trained on only 1T tokens of pre-training data, with all details released ⭐222 · Updated 3 months ago
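Since most of the repositories above center on GRPO, here is a minimal sketch of the group-relative advantage and PPO-style clipped surrogate loss that define the method. It is an illustrative PyTorch fragment, not code from any of the listed projects: the function name `grpo_loss`, the tensor names, and the `clip_eps` default are all assumptions made for this example.

```python
# Minimal GRPO loss sketch (illustrative, not from any repo listed above).
import torch

def grpo_loss(logp_new: torch.Tensor,
              logp_old: torch.Tensor,
              rewards: torch.Tensor,
              mask: torch.Tensor,
              clip_eps: float = 0.2) -> torch.Tensor:
    """GRPO loss for one group of G sampled completions.

    logp_new, logp_old: (G, T) per-token log-probs under the current
    policy and the policy that generated the samples; rewards: (G,)
    scalar reward per completion; mask: (G, T), 1 on completion tokens.
    """
    # Group-relative advantage: normalize rewards within the sampled
    # group, so no learned value function (critic) is needed. Assumes
    # G > 1 so the std is well-defined.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # (G,)
    adv = adv.unsqueeze(-1)                                    # broadcast over tokens

    # PPO-style clipped surrogate objective on the importance ratio.
    ratio = torch.exp(logp_new - logp_old)                     # (G, T)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    per_token_loss = -torch.minimum(unclipped, clipped)

    # Average over completion tokens only.
    return (per_token_loss * mask).sum() / mask.sum()
```

In practice this surrogate is usually combined with a KL penalty against a frozen reference model, as in the DeepSeek-R1-style recipes that several of the entries above reproduce.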