raghavc / LLM-RLHF-Tuning-with-PPO-and-DPO
Comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model training, and support for PPO and DPO algorithms with various configurations for the Alpaca, LLaMA, and LLaMA2 models.
☆171 Updated last year
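Since the toolkit above is named for PPO and DPO, a minimal sketch of the DPO preference loss may help orient readers comparing these repositories. This is not the repository's own code: the function name, tensor shapes, and the default `beta` are illustrative, and it assumes per-response log-probabilities have already been summed over response tokens.

```python
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # (batch,) log-prob of preferred response under the policy
    policy_rejected_logps: torch.Tensor,  # (batch,) log-prob of rejected response under the policy
    ref_chosen_logps: torch.Tensor,       # (batch,) same responses scored by the frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    # How much more the policy favors each response than the reference does.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # DPO maximizes the margin between the two log-ratios through a log-sigmoid.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

Here `beta` trades off fitting the preference data against staying close to the reference model; unlike PPO, no reward model or online sampling loop is needed at training time.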
Alternatives and similar repositories for LLM-RLHF-Tuning-with-PPO-and-DPO
Users interested in LLM-RLHF-Tuning-with-PPO-and-DPO are comparing it to the libraries listed below
- A simplified implementation for experimenting with RLVR on GSM8K. This repository provides a starting point for exploring reasoning. ☆129 Updated 8 months ago
- Work done by the Oxen.ai Community, trying to reproduce the Self-Rewarding Language Model paper from MetaAI. ☆129 Updated 10 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆139 Updated 7 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆470 Updated last year
- A minimal GRPO implementation from scratch; see the sketch after this list. ☆98 Updated 6 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆190 Updated last year
- ☆116 Updated 8 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆211 Updated 2 years ago
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆230 Updated 2 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆106 Updated 4 months ago
- Tina: Tiny Reasoning Models via LoRA ☆290 Updated 2 weeks ago
- Evaluating LLMs with fewer examples ☆161 Updated last year
- ☆320 Updated last year
- [ICLR 2025] DSBench: How Far are Data Science Agents from Becoming Data Science Experts? ☆76 Updated last month
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 Updated 8 months ago
- ☆100 Updated last year
- Official repository for ORPO ☆463 Updated last year
- Learning to Retrieve by Trying - Source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆51 Updated 11 months ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆151 Updated 8 months ago
- An augmented LLM with self-reflection ☆132 Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆173 Updated 3 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details. ☆217 Updated 2 months ago
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ☆187 Updated last month
- A simple unified framework for evaluating LLMs ☆250 Updated 5 months ago
- Controlled Text Generation via Language Model Arithmetic ☆223 Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆361 Updated last year
- An implementation of Everything of Thoughts (XoT). ☆150 Updated last year
- ☆122 Updated last year
- (ICML 2024) AlphaZero-like Tree-Search can guide large language model decoding and training ☆282 Updated last year
- Code for the paper "Learning to Reason without External Rewards" ☆360 Updated 3 months ago
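As referenced next to the minimal-GRPO entry above, here is a sketch of the one step that distinguishes GRPO from PPO: advantages are computed relative to a group of completions sampled for the same prompt, replacing PPO's learned value baseline. It assumes scalar rewards (e.g. from a GSM8K-style verifier) are already available; the function name is illustrative and is not taken from any repository listed here.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards, one per sampled completion.

    Each completion's advantage is its reward standardized within its own group,
    so no value network is needed to provide a baseline.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Four completions for one prompt, scored 0/1 by a verifier:
adv = group_relative_advantages(torch.tensor([[1.0, 0.0, 0.0, 1.0]]))
# Correct completions receive positive advantage, incorrect ones negative.
```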