RLHFlow / Online-RLHF
A recipe for online RLHF and online iterative DPO.
☆502 · Updated 3 months ago
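For orientation: online iterative DPO alternates between sampling responses from the current policy, building preference pairs with a reward model, and taking a DPO gradient step on those pairs. Below is a minimal sketch of the loss and the loop under those assumptions; the function and driver outline are illustrative, not this repo's actual API.

```python
import torch.nn.functional as F

def dpo_loss(policy_logps, ref_logps, beta=0.1):
    """Standard DPO loss on (chosen, rejected) log-prob pairs.

    policy_logps / ref_logps: shape (batch, 2) tensors with the summed
    token log-probs of the chosen (column 0) and rejected (column 1)
    responses under the policy and the frozen reference model.
    """
    pi_margin = policy_logps[:, 0] - policy_logps[:, 1]
    ref_margin = ref_logps[:, 0] - ref_logps[:, 1]
    return -F.logsigmoid(beta * (pi_margin - ref_margin)).mean()

# One online iteration, in outline (illustrative, not this repo's API):
#   1. sample K responses per prompt from the current policy
#   2. score them with a reward model; keep best/worst as (chosen, rejected)
#   3. take a gradient step on dpo_loss over those pairs
#   4. repeat from step 1 with the updated policy
```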
Alternatives and similar repositories for Online-RLHF:
Users who are interested in Online-RLHF are comparing it to the repositories listed below.
- Recipes to train reward models for RLHF. ☆1,257 · Updated last month
- The official implementation of Self-Play Preference Optimization (SPPO) ☆508 · Updated 2 months ago
- RewardBench: the first evaluation tool for reward models. ☆532 · Updated last month
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆249 · Updated last week
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆299 · Updated 7 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆597 · Updated 2 months ago
- Codebase for Iterative DPO Using Rule-based Rewards ☆227 · Updated last month
- A large-scale, fine-grained, diverse preference dataset (and models). ☆335 · Updated last year
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆251 · Updated 6 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆424 · Updated 5 months ago
- A series of technical reports on Slow Thinking with LLMs ☆595 · Updated this week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (see the loss sketch after this list) ☆851 · Updated last month
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆454 · Updated last year
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models ☆422 · Updated last year
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆357 · Updated 2 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆660 · Updated last week
- The official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods" ☆312 · Updated 3 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆542 · Updated 3 months ago
- Recipes to train self-rewarding reasoning LLMs. ☆207 · Updated 3 weeks ago
- Repo for the paper "Free Process Rewards without Process Labels" ☆138 · Updated 2 weeks ago
- An O1 replication for coding ☆329 · Updated 3 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆819 · Updated this week
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback. ☆511 · Updated 4 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆313 · Updated 6 months ago
- Benchmarking LLMs via Uncertainty Quantification ☆217 · Updated last year
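Several of the repositories above (SPPO, SimPO, Step-DPO, the HALOs library) are variations on this preference-loss family. The SimPO entry, for instance, replaces DPO's reference-model term with a length-normalized policy log-prob plus a target margin. A minimal sketch of that reference-free loss, with illustrative hyperparameter values:

```python
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, chosen_len, rejected_len,
               beta=2.0, gamma=1.0):
    """Reference-free SimPO-style loss (a sketch, not the official code).

    chosen_logps / rejected_logps: summed token log-probs of each response
    under the policy; chosen_len / rejected_len: response lengths in tokens.
    The beta and gamma values here are illustrative, not tuned defaults.
    """
    # Length-normalized implicit rewards (no frozen reference model needed).
    r_chosen = beta * chosen_logps / chosen_len
    r_rejected = beta * rejected_logps / rejected_len
    # Bradley-Terry objective with a target reward margin gamma.
    return -F.logsigmoid(r_chosen - r_rejected - gamma).mean()
```

Dropping the reference model removes half of the forward passes per update, which is the main practical difference from the DPO loss sketched near the top of this page.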