RLHFlow / RLHF-Reward-Modeling
Recipes to train reward models for RLHF.
☆1,181 · Updated last week
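The one-line description is terse, so a bit of context may help: RLHF reward models are typically trained on pairwise preference data with a Bradley-Terry objective, scoring a chosen response above a rejected one. Below is a minimal, hypothetical sketch of that objective, not code from this repository; the toy backbone, vocabulary size, and random data are all assumptions.

```python
# Minimal sketch of pairwise (Bradley-Terry) reward-model training,
# the standard objective behind RLHF reward modeling. Illustrative
# only: the tiny GRU backbone and random data are placeholders.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Tiny stand-in for an LLM backbone with a scalar reward head."""
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar reward per sequence

    def forward(self, input_ids):
        x = self.embed(input_ids)
        _, h = self.encoder(x)                # final hidden state
        return self.head(h[-1]).squeeze(-1)   # shape: (batch,)

model = RewardModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Toy preference pair: token ids for chosen and rejected responses.
chosen = torch.randint(0, 1000, (8, 32))
rejected = torch.randint(0, 1000, (8, 32))

# Bradley-Terry loss: -log sigmoid(r_chosen - r_rejected)
r_c, r_r = model(chosen), model(rejected)
loss = -nn.functional.logsigmoid(r_c - r_r).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

In practice the backbone is a pretrained LLM with a scalar head rather than a toy GRU, and training runs over large human- or AI-labeled preference datasets.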
Alternatives and similar repositories for RLHF-Reward-Modeling:
Users interested in RLHF-Reward-Modeling are comparing it to the repositories listed below.
- A recipe for online RLHF and online iterative DPO (a minimal DPO loss sketch follows this list). ☆484 · Updated last month
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆821 · Updated this week
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆575 · Updated last month
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆804 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆508 · Updated this week
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,318 · Updated 11 months ago
- ☆476 · Updated last month
- Official repository for the ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient an… ☆630 · Updated last week
- Large Reasoning Models ☆801 · Updated 2 months ago
- The official implementation of Self-Play Preference Optimization (SPPO) ☆481 · Updated 3 weeks ago
- ☆320 · Updated 2 weeks ago
- A series of technical reports on Slow Thinking with LLMs ☆411 · Updated last week
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆412 · Updated 4 months ago
- O1 Replication Journey ☆1,947 · Updated last month
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,651 · Updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆700 · Updated 4 months ago
- ☆890 · Updated 3 weeks ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆348 · Updated last month
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆289 · Updated 6 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,160 · Updated 3 months ago
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models ☆407 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆534 · Updated 2 months ago
- The official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods". ☆266 · Updated 2 months ago
- Recipes to scale inference-time compute of open models ☆1,002 · Updated last month
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆585 · Updated 11 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,262 · Updated this week
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆464 · Updated last month
- A large-scale, fine-grained, diverse preference dataset (and models). ☆329 · Updated last year
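Several entries above (the online iterative DPO recipe, HALOs, SimPO, SPPO, Step-DPO) center on direct preference optimization. As a rough reference point, here is a minimal sketch of the standard DPO loss; the helper name and toy log-probabilities are hypothetical, and none of this is taken from the listed repositories.

```python
# Minimal sketch of the DPO loss that several repos above implement in
# some variant. Illustrative only: function name and inputs are toys.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """-log sigmoid(beta * [(log pi - log ref)_chosen - (log pi - log ref)_rejected])"""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Toy per-sequence log-probabilities for a batch of 4 preference pairs.
logps = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*logps))  # scalar loss; lower when chosen beats rejected
```

SimPO and the other variants modify this template, e.g., by dropping the reference model or adding a reward margin, but the pairwise log-sigmoid structure is shared.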