A large-scale, fine-grained, diverse preference dataset (and models).
☆363 · Dec 29, 2023 · Updated 2 years ago
Alternatives and similar repositories for UltraFeedback
Users who are interested in UltraFeedback are comparing it to the libraries listed below.
- Generative Judge for Evaluating Alignment ☆250 · Jan 18, 2024 · Updated 2 years ago
- ☆282 · Jan 6, 2025 · Updated last year
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆25 · Mar 7, 2024 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,506 · Sep 8, 2025 · Updated 5 months ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆842 · Jul 1, 2024 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆696 · Feb 16, 2026 · Updated last week
- [NIPS2023] RRHF & Wombat ☆809 · Sep 22, 2023 · Updated 2 years ago
- Recipes to train reward models for RLHF. ☆1,515 · Apr 24, 2025 · Updated 10 months ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,416 · Mar 3, 2024 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆904 · Sep 30, 2025 · Updated 5 months ago
- Large-scale, Informative, and Diverse Multi-round Chat Data (and Models) ☆2,789 · Mar 13, 2024 · Updated last year
- ☆26 · May 30, 2023 · Updated 2 years ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,953 · Aug 9, 2025 · Updated 6 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] ☆588 · Dec 9, 2024 · Updated last year
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,037 · Feb 21, 2026 · Updated last week
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,036 · May 31, 2024 · Updated last year
- Scalable toolkit for efficient model alignment ☆851 · Oct 6, 2025 · Updated 4 months ago
- Reference implementation for DPO (Direct Preference Optimization); a minimal loss sketch follows this list ☆2,855 · Aug 11, 2024 · Updated last year
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,585 · Nov 24, 2025 · Updated 3 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,741 · Jan 8, 2024 · Updated 2 years ago
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,144 · Sep 18, 2025 · Updated 5 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,816 · Jun 17, 2025 · Updated 8 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆409 · May 17, 2024 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆285 · Aug 20, 2023 · Updated 2 years ago
- O1 Replication Journey ☆1,999 · Jan 14, 2025 · Updated last year
- Repo for the paper "Shepherd: A Critic for Language Model Generation" ☆222 · Aug 10, 2023 · Updated 2 years ago
- AllenAI's post-training codebase ☆3,592 · Updated this week
- Directional Preference Alignment ☆58 · Sep 23, 2024 · Updated last year
- ☆1,560 · Feb 20, 2026 · Updated last week
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Dec 20, 2023 · Updated 2 years ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,576 · Mar 27, 2023 · Updated 2 years ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Jun 25, 2024 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,766 · Aug 4, 2024 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,091 · Jun 1, 2023 · Updated 2 years ago
- ☆324 · Jul 25, 2024 · Updated last year
- Aligning Large Language Models with Human: A Survey ☆741 · Sep 11, 2023 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆229 · Sep 30, 2023 · Updated 2 years ago
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆386 · Oct 4, 2023 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ☆2,378 · Mar 1, 2024 · Updated last year
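
Several of the repositories above (the DPO reference implementation, the HALOs library, the alignment-handbook recipes) consume exactly the kind of (prompt, chosen, rejected) preference pairs that UltraFeedback provides. As a rough illustration of how such data is used, here is a minimal sketch of the DPO loss over a batch of pairs; the function name, tensor shapes, and beta value are illustrative assumptions, not code taken from any of the listed repos.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Sketch of the DPO objective for a batch of (chosen, rejected) pairs.

    Each argument is a 1-D tensor of summed token log-probabilities for the
    chosen / rejected responses under the trainable policy or the frozen
    reference model. `beta` trades off preference fit against staying close
    to the reference model; 0.1 is only an illustrative default.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between the implicit rewards of chosen and rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs:
if __name__ == "__main__":
    b = 4
    loss = dpo_loss(torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b))
    print(loss.item())
```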