RLHFlow / RLHF-Reward-Modeling
Recipes to train reward models for RLHF.
☆1,477 · Updated 6 months ago
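For orientation, reward models of this kind are typically trained on pairwise preference data with a Bradley-Terry objective. The sketch below is illustrative only; the function and variable names are hypothetical and not this repo's actual API:

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: -log sigma(r_chosen - r_rejected).
    # r_chosen / r_rejected are the scalar rewards the model assigns to the
    # preferred and rejected response of each pair, shape (batch,).
    # (Hypothetical sketch, not the repo's implementation.)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```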
Alternatives and similar repositories for RLHF-Reward-Modeling
Users interested in RLHF-Reward-Modeling are comparing it to the libraries listed below.
- A recipe for online RLHF and online iterative DPO. ☆536 · Updated 10 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (the SimPO objective is sketched after this list). ☆926 · Updated 9 months ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,403 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆678 · Updated 9 months ago
- ☆995 · Updated 4 months ago
- Large Reasoning Models ☆806 · Updated 11 months ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,823 · Updated 9 months ago
- A series of technical reports on Slow Thinking with LLMs ☆746 · Updated 3 months ago
- Train your Agent model via our easy and efficient framework ☆1,613 · Updated last week
- ☆548 · Updated 10 months ago
- O1 Replication Journey ☆2,002 · Updated 10 months ago
- ☆963 · Updated 9 months ago
- RewardBench: the first evaluation tool for reward models. ☆653 · Updated 5 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆786 · Updated 7 months ago
- verl-agent is an extension of veRL, designed for training LLM/VLM agents via RL. verl-agent is also the official code for the paper "Group-in… ☆1,154 · Updated 3 weeks ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆892 · Updated last month
- The official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods". ☆489 · Updated 3 months ago
- Minimal-cost training of a 0.5B R1-Zero model ☆784 · Updated 6 months ago
- Official Repo for Open-Reasoner-Zero ☆2,060 · Updated 5 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆502 · Updated last year
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models ☆449 · Updated last year
- Scalable RL solution for advanced reasoning of language models ☆1,767 · Updated 7 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,148 · Updated 2 months ago
- Reference implementation for DPO (Direct Preference Optimization); the core DPO loss is sketched after this list. ☆2,779 · Updated last year
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆386 · Updated 9 months ago
- Scalable toolkit for efficient model alignment ☆844 · Updated last month
- A bibliography and survey of the papers surrounding o1 ☆1,209 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,117 · Updated 5 months ago
- [TMLR 2025] Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models ☆685 · Updated 3 weeks ago
- [NeurIPS 2025] TTRL: Test-Time Reinforcement Learning ☆887 · Updated last month
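As referenced above, the DPO objective that several of these repositories implement (the reference implementation, Step-DPO, and the HALOs library) reduces to a single loss over paired responses. A minimal sketch, assuming the per-sequence log-probabilities are already computed; the names are illustrative, not any repo's API:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # DPO (Rafailov et al., 2023): -log sigma(beta * (pi_logratio - ref_logratio)).
    # All inputs are per-sequence sums of token log-probs, shape (batch,).
    pi_logratios = policy_chosen_logps - policy_rejected_logps   # policy's preference for chosen over rejected
    ref_logratios = ref_chosen_logps - ref_rejected_logps        # same ratio under the frozen reference model
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()
```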
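SimPO, also listed above, drops the reference model: it length-normalizes the log-probabilities and adds a target margin gamma. A hedged sketch under the same assumptions (illustrative names; the defaults follow the ballpark reported in the SimPO paper, not this repo's configuration):

```python
import torch.nn.functional as F

def simpo_loss(chosen_logps_mean, rejected_logps_mean, beta=2.0, gamma=0.5):
    # SimPO (Meng et al., 2024): reference-free preference loss
    # -log sigma(beta * (avg_logp_chosen - avg_logp_rejected) - gamma).
    # Inputs are length-normalized (mean per-token) log-probs, shape (batch,).
    return -F.logsigmoid(beta * (chosen_logps_mean - rejected_logps_mean) - gamma).mean()
```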