RLHFlow / RLHF-Reward-Modeling
Recipes to train reward models for RLHF.
☆1,257 · Updated last month
Alternatives and similar repositories for RLHF-Reward-Modeling:
Users interested in RLHF-Reward-Modeling are comparing it to the libraries listed below.
- A recipe for online RLHF and online iterative DPO. ☆502 · Updated 3 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆854 · Updated last month
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆597 · Updated 2 months ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,339 · Updated last year
- ☆507 · Updated 2 months ago
- ☆574 · Updated 2 weeks ago
- A series of technical reports on Slow Thinking with LLMs ☆595 · Updated last week
- The official implementation of Self-Play Preference Optimization (SPPO) ☆515 · Updated 2 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆822 · Updated last week
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,736 · Updated 2 months ago
- Large Reasoning Models ☆800 · Updated 3 months ago
- O1 Replication Journey ☆1,980 · Updated 2 months ago
- Minimal-cost training of a 0.5B R1-Zero model ☆673 · Updated this week
- RewardBench: the first evaluation tool for reward models. ☆532 · Updated last month
- ☆913 · Updated 2 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆664 · Updated 2 weeks ago
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆915 · Updated this week
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆424 · Updated 5 months ago
- The official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods". ☆321 · Updated 3 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,183 · Updated 4 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆598 · Updated last year
- Scalable toolkit for efficient model alignment ☆753 · Updated last week
- ☆325 · Updated last month
- Recipes to scale inference-time compute of open models ☆1,048 · Updated last month
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆301 · Updated 7 months ago
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models ☆423 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆542 · Updated 3 months ago
- Official Repo for Open-Reasoner-Zero ☆1,687 · Updated 3 weeks ago
- Accelerating the development of large multimodal models (LMMs) with the one-click evaluation module lmms-eval. ☆2,268 · Updated this week
- ☆493 · Updated last week