A large-scale, fine-grained, diverse preference dataset (and models).
☆367, updated Dec 29, 2023
Alternatives and similar repositories for UltraFeedback
Users interested in UltraFeedback are comparing it to the repositories listed below.
- Generative Judge for Evaluating Alignment (☆249, updated Jan 18, 2024)
- (☆284, updated Jan 6, 2025)
- Domain-specific preference (DSP) data and customized RM fine-tuning (☆25, updated Mar 7, 2024)
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. (☆843, updated Jul 1, 2024)
- Robust recipes to align language models with human and AI preferences (☆5,587, updated Apr 8, 2026)
- RewardBench: the first evaluation tool for reward models (☆713, updated Feb 16, 2026)
- Recipes to train reward models for RLHF (☆1,531, updated Apr 24, 2025)
- Secrets of RLHF in Large Language Models Part I: PPO (☆1,424, updated Mar 3, 2024)
- [NIPS2023] RRHF & Wombat (☆808, updated Sep 22, 2023)
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) (☆904, updated Sep 30, 2025)
- (☆26, updated May 30, 2023)
- Large-scale, Informative, and Diverse Multi-round Chat Data (and Models) (☆2,827, updated Mar 13, 2024)
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. (☆1,976, updated Aug 9, 2025)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… (☆9,417, updated this week)
- Scalable toolkit for efficient model alignment (☆853, updated Oct 6, 2025)
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] (☆594, updated Dec 9, 2024)
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback (☆1,599, updated Nov 24, 2025)
- Directional Preference Alignment