raghavc / LLM-RLHF-Tuning-with-PPO-and-DPO
Comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model training, and support for PPO and DPO algorithms with various configurations for the Alpaca, LLaMA, and LLaMA2 models.
☆148 · Updated last year
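To make the DPO support concrete, below is a minimal, hypothetical sketch of the Direct Preference Optimization loss that toolkits like this one implement. It is not code from this repository; the function name, arguments, and `beta` default are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Illustrative DPO loss sketch (not this repo's actual API).

    Each argument is a tensor of per-sequence log-probabilities
    (summed over response tokens) for a batch of preference pairs.
    """
    # Implicit reward = beta * log-ratio of policy vs. frozen reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # DPO maximizes the margin between chosen and rejected implicit rewards
    loss = -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
    return loss
```

PPO-based RLHF, by contrast, needs a separately trained reward model and an on-policy rollout loop; DPO folds the preference signal directly into this single supervised-style objective.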
Alternatives and similar repositories for LLM-RLHF-Tuning-with-PPO-and-DPO:
Users interested in LLM-RLHF-Tuning-with-PPO-and-DPO are comparing it to the repositories listed below.
- ☆106 · Updated 2 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning (NeurIPS 2024) ☆124 · Updated last month
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆97 · Updated this week
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 2 months ago
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks ☆175 · Updated this week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆320 · Updated this week
- ☆137 · Updated 4 months ago
- ☆118 · Updated 10 months ago
- ☆142 · Updated 11 months ago
- Implements pre-training, supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF) to train and fine-tune the … ☆50 · Updated last year
- Code for the NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆186 · Updated 4 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆181 · Updated last year
- ☆96 · Updated 9 months ago
- Notes and commented code for RLHF (PPO) ☆85 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆108Updated last week
- (ICML 2024) Alphazero-like Tree-Search can guide large language model decoding and training ☆264 · Updated 10 months ago
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆84 · Updated last month
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆106 · Updated 7 months ago
- A pipeline for LLM knowledge distillation ☆100 · Updated 2 weeks ago
- A simplified implementation for experimenting with Reinforcement Learning (RL) on GSM8K, inspired by RLVR and DeepSeek-R1. This repositor… ☆74 · Updated 2 months ago
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ☆178 · Updated 2 weeks ago
- Minimal hackable GRPO implementation ☆206 · Updated 2 months ago
- ☆314 · Updated 7 months ago
- Controlled Text Generation via Language Model Arithmetic ☆217 · Updated 7 months ago
- We present the first systematic study on the scaling property of raw agents instantiated by LLMs. We find that performance scales with th… ☆115 · Updated 6 months ago
- Official repository for ORPO ☆448 · Updated 10 months ago
- A simple unified framework for evaluating LLMs ☆210 · Updated last week
- Work by the Oxen.ai community attempting to reproduce the Self-Rewarding Language Model paper from Meta AI ☆127 · Updated 5 months ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆195 · Updated 3 weeks ago
- RewardBench: the first evaluation tool for reward models ☆553 · Updated last month