rsshyam / GRPO
☆59 · Updated 10 months ago
Alternatives and similar repositories for GRPO
Users interested in GRPO are comparing it to the libraries listed below.
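To orient the comparison, here is a minimal sketch of the computation at the heart of GRPO: each sampled completion is scored against the other completions drawn for the same prompt, replacing a learned value baseline with per-group normalization. This is an illustrative sketch, not code from rsshyam/GRPO or any repository below; the function name `grpo_advantages` and the tensor layout are assumptions, and a full trainer would add the clipped policy-gradient loss and a KL penalty on top.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    # rewards: (num_prompts, group_size) scalar rewards, one row per prompt.
    # Each completion is z-scored against the other completions sampled
    # for the same prompt, so no learned value network is needed.
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: four completions sampled for a single prompt.
rewards = torch.tensor([[1.0, 0.0, 0.5, 0.0]])
print(grpo_advantages(rewards))  # above-average completions get positive advantages
```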
- ☆47 · Updated 5 months ago
- Natural Language Reinforcement Learning ☆89 · Updated 5 months ago
- Research Code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆95 · Updated 2 months ago
- ☆141 · Updated 6 months ago
- ☆114 · Updated 4 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆59 · Updated 4 months ago
- ☆86 · Updated 2 weeks ago
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆104 · Updated 4 months ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv 2401.01335) ☆29 · Updated last year
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, and ReMax, and reproduces DeepSeek R1-Zero (a leave-one-out RLOO baseline sketch appears after this list). ☆62 · Updated 3 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆108 · Updated this week
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆105 · Updated 3 weeks ago
- ☆53 · Updated 3 months ago
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆58 · Updated 2 months ago
- ☆40 · Updated 3 weeks ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆89 · Updated last week
- A repository showcasing the use of MCTS with LLMs to solve GSM8K problems ☆82 · Updated 2 months ago
- A repository for research on medium-sized language models. ☆76 · Updated last year
- MARFT stands for Multi-Agent Reinforcement Fine-Tuning. This repository implements an LLM-based multi-agent reinforcement fine-tuning fra… ☆35 · Updated 2 weeks ago
- ☆59 · Updated 3 months ago
- Efficient Agent Training for Computer Use ☆94 · Updated last week
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆54 · Updated last month
- ☆33 · Updated 8 months ago
- Reference implementation of Token-level Direct Preference Optimization (TDPO); a sequence-level DPO loss sketch appears after this list. ☆140 · Updated 3 months ago
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆91 · Updated 3 months ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆102 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated 9 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆103 · Updated this week
- The official implementation of the paper "Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction". ☆34 · Updated last year
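The single-GPU RLHF entry above lists RLOO among its supported algorithms; its leave-one-out baseline is simple enough to sketch. The function below is illustrative and assumes nothing about that repository's actual code: each sample's baseline is the mean reward of the other k - 1 samples for the same prompt, so the advantage needs no learned critic.

```python
import torch

def rloo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # rewards: (num_prompts, k) with k > 1 samples per prompt.
    # The baseline for each sample is the mean reward of the other
    # k - 1 samples drawn for the same prompt (leave-one-out).
    k = rewards.shape[-1]
    baseline = (rewards.sum(dim=-1, keepdim=True) - rewards) / (k - 1)
    return rewards - baseline
```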
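The TDPO entry above builds on Direct Preference Optimization. The sketch below shows the standard sequence-level DPO objective rather than TDPO's token-level refinement, assuming summed per-token log-probabilities have already been computed; the function name and shapes are assumptions, not that repository's API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Each argument: (batch,) summed log-probabilities of the chosen /
    # rejected completion under the policy or the frozen reference model.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # Widen the margin between chosen and rejected log-ratios, scaled
    # by beta, through a logistic (Bradley-Terry) loss.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```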