raghavc / LLM-RLHF-Tuning-with-PPO-and-DPO
Comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model training, and support for PPO and DPO algorithms with various configurations for the Alpaca, LLaMA, and LLaMA2 models.
☆150 · Updated last year
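The toolkit above trains with both PPO and DPO. As a rough illustration of the DPO objective it implements, here is a minimal sketch of the per-pair DPO loss in pure Python; the function name and arguments are illustrative, not the repository's actual API:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair (illustrative sketch).

    Each argument is the summed log-probability of a full response
    under either the trainable policy or the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(x)) written stably as log(1 + exp(-x))
    return math.log1p(math.exp(-logits))
```

When the policy and reference agree, the loss sits at log 2; it falls below that as the policy learns to prefer the chosen response more strongly than the reference does.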
Alternatives and similar repositories for LLM-RLHF-Tuning-with-PPO-and-DPO:
Users interested in LLM-RLHF-Tuning-with-PPO-and-DPO also compare it to the libraries listed below.
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆103 · Updated 3 weeks ago
- ☆109 · Updated 3 months ago
- ☆142 · Updated last year
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆338 · Updated this week
- ☆138 · Updated 5 months ago
- Minimal GRPO implementation from scratch ☆87 · Updated last month
- ☆97 · Updated 10 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning (NeurIPS 2024) ☆128 · Updated 2 months ago
- Minimal hackable GRPO implementation ☆217 · Updated 3 months ago
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆90 · Updated 2 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆464 · Updated last year
- A simplified implementation for experimenting with Reinforcement Learning (RL) on GSM8K, inspired by RLVR and DeepSeek-R1. This repositor… ☆84 · Updated 3 months ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆266 · Updated 11 months ago
- ☆287 · Updated last month
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆181 · Updated last year
- A project to improve the skills of large language models ☆354 · Updated this week
- Official repository for ORPO ☆451 · Updated 11 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆322 · Updated 4 months ago
- RewardBench: the first evaluation tool for reward models ☆562 · Updated this week
- The official repository for Inheritune ☆111 · Updated 3 months ago
- ☆257 · Updated last year
- X-LoRA: Mixture of LoRA Experts ☆221 · Updated 9 months ago
- Critique-out-Loud Reward Models ☆64 · Updated 6 months ago
- Code for "STaR: Bootstrapping Reasoning With Reasoning" (NeurIPS 2022) ☆205 · Updated 2 years ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆244 · Updated 3 weeks ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆220 · Updated last month
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆134 · Updated 6 months ago
- Resources for the paper "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ☆128 · Updated last month
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆190 · Updated 5 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆234 · Updated 3 weeks ago
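Several entries above are minimal GRPO implementations. The core idea GRPO adds over PPO is dropping the learned value critic and instead scoring each sampled completion relative to the other completions drawn for the same prompt. A minimal sketch of that group-relative advantage computation (the function name is illustrative, not taken from any listed repo):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each reward against its group.

    `rewards` holds one scalar reward per completion sampled for a
    single prompt; each advantage is that reward's z-score within
    the group, so no value network is needed as a baseline.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard a zero-variance group
    return [(r - mean) / std for r in rewards]
```

For example, rewards of `[1.0, 0.0, 1.0, 0.0]` normalize to `[1.0, -1.0, 1.0, -1.0]`, so correct completions are pushed up and incorrect ones pushed down by equal amounts.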