raghavc / LLM-RLHF-Tuning-with-PPO-and-DPO
Comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model training, and support for PPO and DPO algorithms with various configurations for the Alpaca, LLaMA, and LLaMA2 models.
☆168 · Updated last year
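The DPO support mentioned above refers to Direct Preference Optimization, which replaces PPO's reward-model-plus-RL loop with a direct classification-style loss over preference pairs. A minimal illustrative sketch of that loss (not code from this repository; the function name and toy log-probabilities are my own):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of a full response
    under the trainable policy or the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log sigmoid(margin): small when the policy prefers the chosen response
    # more strongly than the reference does
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When policy and reference agree exactly, the margin is 0 and the
# loss is log 2:
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
```

Raising the policy's log-probability on the chosen response (or lowering it on the rejected one) widens the margin and drives the loss toward zero; `beta` controls how strongly the policy is allowed to drift from the reference.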
Alternatives and similar repositories for LLM-RLHF-Tuning-with-PPO-and-DPO
Users that are interested in LLM-RLHF-Tuning-with-PPO-and-DPO are comparing it to the libraries listed below
- This is work done by the Oxen.ai Community, trying to reproduce the Self-Rewarding Language Model paper from MetaAI. ☆130 · Updated 10 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆137 · Updated 6 months ago
- Augmented LLM with self-reflection ☆132 · Updated last year
- ☆122 · Updated last year
- A simplified implementation for experimenting with RLVR on GSM8K. This repository provides a starting point for exploring reasoning. ☆122 · Updated 7 months ago
- Tina: Tiny Reasoning Models via LoRA ☆281 · Updated last month
- ☆116 · Updated 7 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆470 · Updated last year
- We present the first systematic study on the scaling property of raw agents instantiated by LLMs. We find that performance scales with th… ☆130 · Updated 11 months ago
- Controlled Text Generation via Language Model Arithmetic ☆223 · Updated last year
- ☆319 · Updated 11 months ago
- ☆129 · Updated last year
- ☆150 · Updated 9 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆189 · Updated last year
- Repository for the paper Stream of Search: Learning to Search in Language ☆150 · Updated 7 months ago
- (ICML 2024) AlphaZero-like Tree-Search can guide large language model decoding and training ☆279 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 7 months ago
- A simple unified framework for evaluating LLMs ☆243 · Updated 5 months ago
- Minimal GRPO implementation from scratch ☆96 · Updated 6 months ago
- ☆100 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆105 · Updated 3 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆209 · Updated 2 years ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆118 · Updated 4 months ago
- An implementation of Everything of Thoughts (XoT). ☆148 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆113 · Updated last week
- A compact LLM pretrained in 9 days by using high quality data ☆323 · Updated 5 months ago
- ☆144 · Updated last year
- X-LoRA: Mixture of LoRA Experts ☆242 · Updated last year
- Minimal hackable GRPO implementation ☆282 · Updated 7 months ago
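Several entries above are minimal GRPO implementations. The core idea that distinguishes GRPO from PPO is that advantages are computed relative to a group of completions sampled for the same prompt, with no learned value network. A minimal sketch of that advantage step (illustrative only, not drawn from any listed repository):

```python
def group_relative_advantages(rewards):
    """Normalize rewards within one group of sampled completions:
    A_i = (r_i - mean(r)) / std(r). Completions scoring above the
    group mean get positive advantages, those below get negative."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    if std == 0:
        return [0.0] * n  # all rewards equal: no learning signal
    return [(r - mean) / std for r in rewards]

# Two correct and two incorrect completions (rewards 1 and 0):
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]
```

These normalized advantages then weight the per-token policy-gradient loss, as in PPO, but without a critic.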