A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs).
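The losses named above share a common shape: a contrast between a policy model's and a reference model's log-probabilities on a preferred versus a dispreferred response. As a minimal, hedged sketch (not HALOs' actual API — the function name and signature here are hypothetical), the per-pair DPO objective can be written as:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Hypothetical single-example DPO loss:
    -log sigmoid(beta * (chosen log-ratio - rejected log-ratio)).
    Inputs are summed log-probabilities of each response under the
    policy and the frozen reference model."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp      # log pi(y_w|x) - log pi_ref(y_w|x)
    rejected_ratio = policy_rejected_logp - ref_rejected_logp  # log pi(y_l|x) - log pi_ref(y_l|x)
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)); loss shrinks as the chosen margin grows
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy equals the reference, both ratios are zero and the loss is log 2; preferring the chosen response over the rejected one (relative to the reference) drives it lower. KTO, ORPO, and the other HALOs vary this same margin-based template rather than this exact formula.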
☆904 · Updated Sep 30, 2025
Alternatives and similar repositories for HALOs
Users interested in HALOs compare it to the libraries listed below.
- Robust recipes to align language models with human and AI preferences ☆5,506 · Updated Sep 8, 2025
- Reference implementation for DPO (Direct Preference Optimization) ☆2,855 · Updated Aug 11, 2024
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆946 · Updated Feb 16, 2025
- Official repository for ORPO ☆471 · Updated May 31, 2024
- Recipes to train a reward model for RLHF. ☆1,515 · Updated Apr 24, 2025
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,037 · Updated Feb 21, 2026
- RewardBench: the first evaluation tool for reward models. ☆696 · Updated Feb 16, 2026
- Train transformer language models with reinforcement learning. ☆17,460 · Updated this week
- [NIPS2023] RRHF & Wombat ☆809 · Updated Sep 22, 2023
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] ☆588 · Updated Dec 9, 2024
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,235 · Updated May 8, 2024
- Scalable toolkit for efficient model alignment ☆851 · Updated Oct 6, 2025
- AllenAI's post-training codebase ☆3,592 · Updated this week
- A recipe for online RLHF and online iterative DPO. ☆539 · Updated Dec 28, 2024
- A large-scale, fine-grained, diverse preference dataset (and models). ☆363 · Updated Dec 29, 2023
- A framework for few-shot evaluation of language models. ☆11,478 · Updated Feb 15, 2026
- Tools for merging pretrained large language models. ☆6,814 · Updated Jan 26, 2026
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,953 · Updated Aug 9, 2025
- (no description) ☆16 · Updated Jul 23, 2024
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,741 · Updated Jan 8, 2024
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,108 · Updated this week
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆842 · Updated Jul 1, 2024
- Minimalistic large language model 3D-parallelism training ☆2,579 · Updated Feb 19, 2026
- Scalable RL solution for advanced reasoning of language models ☆1,809 · Updated Mar 18, 2025
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,306 · Updated Dec 9, 2025
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,339 · Updated this week
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆391 · Updated Jan 19, 2025
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆514 · Updated May 20, 2024
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Updated Oct 27, 2024
- (no description) ☆130 · Updated Oct 1, 2024
- Simple RL training for reasoning ☆3,830 · Updated Dec 23, 2025
- Official repo for Open-Reasoner-Zero ☆2,087 · Updated Jun 2, 2025
- Democratizing Reinforcement Learning for LLMs ☆5,167 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,315 · Updated Mar 6, 2025
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,558 · Updated Jan 14, 2026
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,673 · Updated Apr 17, 2024
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. ☆2,903 · Updated this week
- A modular RL library to fine-tune language models to human preferences ☆2,378 · Updated Mar 1, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,677 · Updated Oct 28, 2024