RLHFlow / Online-RLHF
A recipe for online RLHF and online iterative DPO.
☆451 · Updated last month
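For context on the headline topic: online iterative DPO alternates between sampling fresh responses from the current policy, labeling preference pairs (e.g. with a reward model), and re-fitting the DPO objective on those new pairs. The sketch below shows the standard DPO loss (Rafailov et al., 2023) that each iteration minimizes; it is an illustration under our own naming, not code from this repository.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over a batch of (chosen, rejected) response pairs.

    Each input is the summed log-probability of a full response under the
    trainable policy or the frozen reference model.
    """
    # Implicit rewards are beta-scaled log-ratios of policy to reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the log-probability that the chosen response beats the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In an online recipe, the `policy_*_logps` would come from the model being trained and the `ref_*_logps` from the previous iteration's checkpoint, so the reference moves with each round of data collection.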
Alternatives and similar repositories for Online-RLHF:
Users interested in Online-RLHF are comparing it to the libraries listed below:
- Recipes to train reward models for RLHF. ☆1,016 · Updated this week
- The official implementation of Self-Play Preference Optimization (SPPO). ☆507 · Updated 3 weeks ago
- RewardBench: the first evaluation tool for reward models. ☆462 · Updated this week
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆226 · Updated 2 weeks ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆397 · Updated 2 months ago
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models ☆395 · Updated 10 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆229 · Updated 4 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆446 · Updated 8 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆321 · Updated 11 months ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆225 · Updated 3 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆386 · Updated last month
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆319 · Updated last year
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". ☆525 · Updated last month
- Benchmarking LLMs via Uncertainty Quantification ☆228 · Updated 10 months ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,308 · Updated 9 months ago
- (ICML 2024) Alphazero-like Tree-Search can guide large language model decoding and training ☆233 · Updated 6 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆756 · Updated last month
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆173 · Updated last year
- Generative Judge for Evaluating Alignment ☆220 · Updated 11 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆764 · Updated this week
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆506 · Updated last week
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆285 · Updated 3 months ago
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆302 · Updated 5 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆117 · Updated 7 months ago
- RLHF implementation details of OAI's 2019 codebase ☆159 · Updated 11 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆364 · Updated 2 months ago