hkproj / rlhf-ppo
Notes and commented code for RLHF (PPO)
☆124 · Updated last year
Alternatives and similar repositories for rlhf-ppo
Users interested in rlhf-ppo are comparing it to the libraries listed below.
- Minimal hackable GRPO implementation ☆321 · Updated last year
- Advanced NLP, Spring 2025 https://cmu-l3.github.io/anlp-spring2025/ ☆71 · Updated 10 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆273 · Updated 3 months ago
- minimal GRPO implementation from scratch ☆102 · Updated 10 months ago
- Survey of Small Language Models from Penn State, ... ☆241 · Updated 3 months ago
- ☆100 · Updated last year
- Direct Preference Optimization from scratch in PyTorch ☆126 · Updated 9 months ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆143 · Updated 8 months ago
- ☆104 · Updated 6 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆222 · Updated 6 months ago
- ☆82 · Updated last year
- ☆332 · Updated 8 months ago
- Tina: Tiny Reasoning Models via LoRA ☆316 · Updated 4 months ago
- A project to improve skills of large language models ☆804 · Updated this week
- ☆328 · Updated 8 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆116 · Updated 6 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling" ☆282 · Updated 11 months ago
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example ☆405 · Updated 2 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆273 · Updated last year
- ☆412 · Updated last year
- A simplified implementation for experimenting with RLVR on GSM8K. This repository provides a starting point for exploring reasoning. ☆158 · Updated last year
- ☆112 · Updated 7 months ago
- LLaMA 2 implemented from scratch in PyTorch ☆365 · Updated 2 years ago
- [NeurIPS 2025] TTRL: Test-Time Reinforcement Learning ☆972 · Updated 4 months ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆411 · Updated 2 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year
- Official repository for ORPO ☆469 · Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆260 · Updated 8 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆623 · Updated last week
- ☆130 · Updated last year