raghavc / LLM-RLHF-Tuning-with-PPO-and-DPO
Comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model training, and support for PPO and DPO algorithms with various configurations for the Alpaca, LLaMA, and LLaMA2 models.
☆179 · Updated last year
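Of the two alignment algorithms the toolkit supports, DPO is the simpler to summarize: it optimizes the policy directly on preference pairs, with no separate reward model or RL loop. A minimal sketch of the DPO loss on scalar per-sequence log-probabilities (function and variable names here are illustrative, not taken from the repository):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair (illustrative sketch).

    Each argument is the summed log-probability of the chosen or
    rejected response under the trained policy or the frozen
    reference model.
    """
    # Implicit reward margins: how much the policy favors each
    # response relative to the reference model.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)), written stably as softplus(-logits).
    return math.log1p(math.exp(-logits))
```

Minimizing this pushes the policy to assign relatively more probability to the chosen response than the reference model does; `beta` controls how far the policy may drift from the reference.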
Alternatives and similar repositories for LLM-RLHF-Tuning-with-PPO-and-DPO
Users interested in LLM-RLHF-Tuning-with-PPO-and-DPO are comparing it to the libraries listed below.
- Work by the Oxen.ai Community attempting to reproduce the Self-Rewarding Language Model paper from Meta AI. ☆132 · Updated last year
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024. ☆142 · Updated 9 months ago
- Tina: Tiny Reasoning Models via LoRA. ☆309 · Updated 2 months ago
- ☆117 · Updated 10 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement. ☆192 · Updated last year
- Minimal GRPO implementation from scratch. ☆100 · Updated 9 months ago
- A simplified implementation for experimenting with RLVR on GSM8K; this repository provides a starting point for exploring reasoning. ☆148 · Updated 10 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs". ☆472 · Updated last year
- A simple unified framework for evaluating LLMs. ☆255 · Updated 8 months ago
- Minimal hackable GRPO implementation. ☆303 · Updated 10 months ago
- Code for the paper "Learning to Reason without External Rewards". ☆382 · Updated 5 months ago
- ☆320 · Updated last year
- ☆226 · Updated 9 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems. ☆115 · Updated 6 months ago
- ☆100 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning. ☆366 · Updated last year
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't". ☆268 · Updated last month
- [NeurIPS 2024] GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations. ☆67 · Updated last year
- ☆159 · Updated last year
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆579 · Updated last month
- A compact LLM pretrained in 9 days using high-quality data. ☆336 · Updated 8 months ago
- Code for "STaR: Bootstrapping Reasoning With Reasoning" (NeurIPS 2022). ☆218 · Updated 2 years ago
- An implementation of Everything of Thoughts (XoT). ☆155 · Updated last year
- ☆122 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025]. ☆179 · Updated 5 months ago
- nanoGRPO: a lightweight implementation of Group Relative Policy Optimization (GRPO). ☆133 · Updated 7 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- Repository for the paper "Stream of Search: Learning to Search in Language". ☆151 · Updated 10 months ago
- X-LoRA: Mixture of LoRA Experts. ☆255 · Updated last year
- An LLM augmented with self-reflection. ☆135 · Updated 2 years ago
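Several entries above (the minimal GRPO implementations and nanoGRPO) center on GRPO's core idea: rather than training a value network as PPO does, it scores each sampled completion against the other completions drawn for the same prompt. A hedged sketch of that group-normalization step, assuming scalar rewards (names are illustrative, not from any listed repository):

```python
def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: z-score each reward within its group.

    `rewards` holds scalar rewards for several completions sampled
    from the same prompt; no learned value function is needed.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    # Population variance over the group; eps guards against a
    # zero-variance group (all rewards identical).
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

These per-sample advantages then weight a clipped policy-gradient objective, which is what makes the "from scratch" implementations above so compact.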