RewardBench: the first evaluation tool for reward models.
☆707 · Updated Feb 16, 2026
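At its core, a reward-model benchmark like this measures preference accuracy: the reward model should assign a higher score to the chosen completion of a prompt than to the rejected one. A minimal sketch of that loop, assuming a Hugging Face sequence-classification reward model (the model name and the toy pair below are illustrative, not RewardBench's actual pipeline):

```python
# Minimal sketch: preference accuracy of a reward model on (prompt, chosen, rejected)
# triples. Model name and data are illustrative, not RewardBench's pipeline.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "OpenAssistant/reward-model-deberta-v3-large-v2"  # an example public reward model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

pairs = [  # toy data: the chosen answer should outscore the rejected one
    ("What is 2+2?", "2+2 equals 4.", "2+2 equals 5."),
]

correct = 0
with torch.no_grad():
    for prompt, chosen, rejected in pairs:
        scores = []
        for completion in (chosen, rejected):
            inputs = tokenizer(prompt, completion, return_tensors="pt", truncation=True)
            scores.append(model(**inputs).logits[0].item())  # scalar reward score
        correct += scores[0] > scores[1]

print(f"preference accuracy: {correct / len(pairs):.2%}")
```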
Alternatives and similar repositories for reward-bench
Users interested in reward-bench are comparing it to the libraries listed below.
- Recipes to train reward models for RLHF. ☆1,529 · Updated Apr 24, 2025
- A recipe for online RLHF and online iterative DPO. ☆543 · Updated Dec 28, 2024
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,340 · Updated this week
- Scalable toolkit for efficient model alignment ☆852 · Updated Oct 6, 2025
- Robust recipes to align language models with human and AI preferences ☆5,558 · Updated Apr 8, 2026
- ☆160 · Updated Nov 23, 2024
- Reference implementation for DPO (Direct Preference Optimization); see the loss sketch after this list. ☆2,883 · Updated Aug 11, 2024
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,966 · Updated Aug 9, 2025
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆903 · Updated Sep 30, 2025
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward; see the reward sketch after this list. ☆951 · Updated Feb 16, 2025
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,837 · Updated Jun 17, 2025
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,117 · Updated Jun 1, 2023
- Arena-Hard-Auto: An automatic LLM benchmark. ☆1,015 · Updated Jun 21, 2025
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,596 · Updated Nov 24, 2025
- A large-scale, fine-grained, diverse preference dataset (and models). ☆367 · Updated Dec 29, 2023
- AllenAI's post-training codebase ☆3,683 · Updated Apr 13, 2026
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆842 · Updated Jul 1, 2024
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,743 · Updated Jan 8, 2024
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,237 · Updated May 8, 2024
- Train transformer language models with reinforcement learning. ☆18,054 · Updated this week
- GenRM-CoT: Data release for verification rationales ☆68 · Updated Oct 16, 2024
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,407 · Updated Apr 11, 2024
- Official Repo for Open-Reasoner-Zero ☆2,091 · Updated Jun 2, 2025
- ☆314 · Updated Jun 9, 2024
- O1 Replication Journey ☆2,000 · Updated Jan 14, 2025
- ☆1,129 · Updated Jan 10, 2026
- Simple RL training for reasoning ☆3,845 · Updated Dec 23, 2025
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆701 · Updated Jan 20, 2025
- A framework for few-shot evaluation of language models. ☆12,138 · Updated Apr 8, 2026
- verl: Volcano Engine Reinforcement Learning for LLMs ☆20,789 · Updated this week
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆189 · Updated May 20, 2025
- Directional Preference Alignment ☆61 · Updated Sep 23, 2024
- [NeurIPS 2023] RRHF & Wombat ☆808 · Updated Sep 22, 2023
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆592 · Updated Dec 9, 2024
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated Sep 24, 2024
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,348 · Updated Dec 9, 2025
- Generative Judge for Evaluating Alignment ☆249 · Updated Jan 18, 2024
- ☆284 · Updated Jan 6, 2025
- Scalable RL solution for advanced reasoning of language models ☆1,845 · Updated Mar 18, 2025
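Two of the methods listed above reduce to losses compact enough to sketch inline. First, the DPO objective (see the reference-implementation entry): given summed log-probabilities of the chosen and rejected completions under the policy and a frozen reference model, the loss is the negative log-sigmoid of the scaled margin between implicit rewards. A minimal sketch; the function name and the beta default are illustrative:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """-log sigmoid(beta * [(chosen policy-ref gap) - (rejected policy-ref gap)])."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)        # implicit reward, chosen
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)  # implicit reward, rejected
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```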
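Second, SimPO (see its entry above) drops the reference model entirely: the implicit reward is the length-normalized policy log-probability, and a target margin gamma is subtracted inside the sigmoid. A sketch under the same conventions; the beta and gamma defaults are illustrative:

```python
import torch
import torch.nn.functional as F

def simpo_loss(policy_chosen_logps, policy_rejected_logps,
               chosen_lengths, rejected_lengths, beta=2.0, gamma=1.0):
    # Length-normalized, reference-free implicit rewards.
    r_chosen = beta * policy_chosen_logps / chosen_lengths
    r_rejected = beta * policy_rejected_logps / rejected_lengths
    # Bradley-Terry-style loss with a target reward margin gamma.
    return -F.logsigmoid(r_chosen - r_rejected - gamma).mean()
```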