lsdefine / simple_GRPO
A very simple GRPO implementation for reproducing R1-like LLM thinking.
☆1,219 · Updated last week
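For context, the core idea behind GRPO (Group Relative Policy Optimization) is to sample a group of completions per prompt and use group-relative, normalized rewards as advantages, avoiding a learned value model. The sketch below is illustrative only (it is not code from this repository); the function name and reward values are hypothetical.

```python
def grpo_advantages(rewards):
    """Compute group-relative advantages from a group's scalar rewards:
    subtract the group mean and divide by the group std (plus epsilon)."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]

# Hypothetical group of 4 sampled completions: two correct (reward 1), two wrong (reward 0).
advs = grpo_advantages([1.0, 0.0, 1.0, 0.0])
```

Each token of a sampled completion is then weighted by its group's advantage in a PPO-style clipped objective; completions above the group mean get positive advantage, those below get negative.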
Alternatives and similar repositories for simple_GRPO
Users interested in simple_GRPO are comparing it to the libraries listed below
- Reproduce R1-Zero on Logic Puzzle ☆2,380 · Updated 4 months ago
- Official Repo for Open-Reasoner-Zero ☆2,008 · Updated 2 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆3,166 · Updated last week
- Awesome RL Reasoning Recipes ("Triple R") ☆762 · Updated last month
- Latest Advances on System-2 Reasoning ☆1,200 · Updated last month
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,470 · Updated 2 months ago
- ☆734 · Updated 2 months ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,805 · Updated 6 months ago
- O1 Replication Journey ☆1,996 · Updated 6 months ago
- Minimal-cost training of a 0.5B R1-Zero ☆763 · Updated 2 months ago
- ☆544 · Updated 7 months ago
- Distributed RL System for LLM Reasoning ☆2,090 · Updated this week
- Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning ☆697 · Updated last week
- A series of technical reports on Slow Thinking with LLMs ☆713 · Updated last month
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆1,545 · Updated 2 weeks ago
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆2,167 · Updated last week
- A fork to add multimodal model training to open-r1 ☆1,346 · Updated 5 months ago
- llm & rl ☆172 · Updated this week
- Welcome to LLM-Dojo, an open-source learning ground for large language models, built on concise and readable code: a model training framework (supporting mainstream models such as Qwen, Llama, GLM, etc.), an RLHF framework (DPO/CPO/KTO/PPO), and more. 👩🎓👨🎓 ☆812 · Updated 3 weeks ago
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆796 · Updated 2 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,668 · Updated 4 months ago
- Awesome RL-based LLM Reasoning ☆568 · Updated 2 weeks ago
- A project for training a large language model from scratch, covering pretraining, fine-tuning, and direct preference optimization; the model has 1B parameters and supports Chinese and English. ☆522 · Updated 5 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆654 · Updated 6 months ago
- Train a 1B LLM with 1T tokens from scratch as a personal project ☆703 · Updated 3 months ago
- Simple RL training for reasoning ☆3,693 · Updated 3 months ago
- Large Reasoning Models ☆804 · Updated 7 months ago
- A reproduction of open-r1 that runs GRPO training on 0.5B, 1.5B, 3B, and 7B Qwen models, observing some interesting phenomena. ☆40 · Updated 3 months ago
- ☆857 · Updated last month
- Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models ☆538 · Updated last month