RLHFlow / Online-RLHF
A recipe for online RLHF and online iterative DPO.
☆537 · Updated 11 months ago
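The recipe's core loop is iterative DPO: sample responses from the current policy, rank them into preference pairs (e.g., with a reward model), fine-tune on those pairs, and repeat. A minimal sketch of the underlying DPO loss in PyTorch is below; the function name and tensor layout are illustrative assumptions, not this repository's actual API.

```python
# Minimal sketch of the DPO objective that (online) iterative DPO re-applies
# each round. Illustrative only: names/shapes are assumptions, not the
# Online-RLHF codebase's API.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """-log sigmoid(beta * (log-ratio(chosen) - log-ratio(rejected)))."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps        # log pi/pi_ref, preferred
    rejected_logratio = policy_rejected_logps - ref_rejected_logps  # log pi/pi_ref, dispreferred
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

The "online" part is what distinguishes the recipe from vanilla DPO: instead of training once on a fixed offline preference dataset, each iteration regenerates preference pairs from the latest policy before applying this loss again.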
Alternatives and similar repositories for Online-RLHF
Users interested in Online-RLHF are comparing it to the repositories listed below.
- Recipes to train reward models for RLHF. ☆1,486 · Updated 7 months ago
- ☆251 · Updated 6 months ago
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models ☆454 · Updated last year
- The official implementation of Self-Play Preference Optimization (SPPO) ☆584 · Updated 10 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆276 · Updated 8 months ago
- RewardBench: the first evaluation tool for reward models. ☆663 · Updated 5 months ago
- Codebase for Iterative DPO Using Rule-based Rewards ☆263 · Updated 7 months ago
- Recipes to train self-rewarding reasoning LLMs. ☆229 · Updated 9 months ago
- Controllable Text Generation for Large Language Models: A Survey ☆195 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆680 · Updated 10 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆327 · Updated last year
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,406 · Updated last year
- The official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods". ☆498 · Updated 4 months ago
- ☆327 · Updated 6 months ago
- ☆341 · Updated 6 months ago
- ☆315 · Updated last year
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆388 · Updated 10 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆198 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆931 · Updated 9 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆356 · Updated last year
- ☆213 · Updated 9 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆248 · Updated 7 months ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆283 · Updated last year
- Adds Sequence Parallelism to LLaMA-Factory ☆596 · Updated last month
- ☆329 · Updated 3 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆271 · Updated last year
- A version of verl that supports diverse tool use ☆714 · Updated last week
- Benchmarking LLMs via Uncertainty Quantification ☆252 · Updated last year
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆576 · Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆257 · Updated 6 months ago