Official repository for ORPO
☆478 · May 31, 2024 · Updated last year
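For context on the listings below: ORPO (odds-ratio preference optimization) augments the standard supervised fine-tuning loss with an odds-ratio penalty that pushes the policy's odds of the chosen response above those of the rejected one. A minimal sketch of that odds-ratio term, assuming length-normalized log-probabilities as inputs; the function name and toy values are illustrative, not taken from the repository:

```python
import math

def odds_ratio_loss(logp_chosen: float, logp_rejected: float) -> float:
    """Sketch of the ORPO odds-ratio term.

    logp_chosen / logp_rejected are length-normalized log-probabilities
    (each strictly negative) of the chosen and rejected responses under
    the policy. odds(y|x) = P(y|x) / (1 - P(y|x)).
    """
    def log_odds(logp: float) -> float:
        p = math.exp(logp)
        return logp - math.log(1.0 - p)

    log_or = log_odds(logp_chosen) - log_odds(logp_rejected)
    # -log sigmoid(log_or), written in a numerically direct form
    return math.log(1.0 + math.exp(-log_or))

# The full ORPO objective adds this term, scaled by a weight lambda,
# to the ordinary NLL loss on the chosen response.
```

The loss is small when the chosen response is much more likely than the rejected one and grows when the preference is reversed.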
Alternatives and similar repositories for ORPO
Users interested in ORPO often compare it to the libraries listed below.
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆903 · Sep 30, 2025 · Updated 6 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward. ☆948 · Feb 16, 2025 · Updated last year
- Robust recipes to align language models with human and AI preferences. ☆5,558 · Apr 8, 2026 · Updated last week
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI. ☆1,407 · Apr 11, 2024 · Updated 2 years ago
- Reference implementation for DPO (Direct Preference Optimization). ☆2,876 · Aug 11, 2024 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024]. ☆592 · Dec 9, 2024 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN). ☆1,237 · May 8, 2024 · Updated last year
- ☆131 · Oct 1, 2024 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆707 · Feb 16, 2026 · Updated 2 months ago
- A recipe for online RLHF and online iterative DPO. ☆543 · Dec 28, 2024 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024]. ☆149 · Oct 27, 2024 · Updated last year
- Stanford NLP Python library for Representation Finetuning (ReFT). ☆1,564 · Mar 5, 2026 · Updated last month
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,158 · Apr 6, 2026 · Updated last week
- Tools for merging pretrained large language models. ☆6,973 · Mar 15, 2026 · Updated last month
- ☆323 · Sep 18, 2024 · Updated last year
- Scalable toolkit for efficient model alignment. ☆852 · Oct 6, 2025 · Updated 6 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024). ☆209 · May 20, 2024 · Updated last year
- AllenAI's post-training codebase. ☆3,683 · Updated this week
- Low-Rank adapter extraction for fine-tuned transformer models. ☆181 · May 2, 2024 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment". ☆75 · May 20, 2025 · Updated 10 months ago
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process. ☆30 · Aug 2, 2024 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct. ☆191 · Jan 16, 2025 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning. ☆522 · Oct 20, 2024 · Updated last year
- Evaluate your LLM's response with Prometheus and GPT-4 💯. ☆1,065 · Apr 25, 2025 · Updated 11 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,966 · Aug 9, 2025 · Updated 8 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment. ☆62 · Aug 30, 2024 · Updated last year
- Arena-Hard-Auto: an automatic LLM benchmark. ☆1,015 · Jun 21, 2025 · Updated 9 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization. ☆96 · Aug 20, 2024 · Updated last year
- ☆16 · Jul 23, 2024 · Updated last year
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models". ☆202 · Dec 16, 2023 · Updated 2 years ago
- Self-Supervised Alignment with Mutual Information. ☆20 · May 24, 2024 · Updated last year
- Train transformer language models with reinforcement learning. ☆18,054 · Updated this week
- Self-Alignment with Principle-Following Reward Models. ☆170 · Sep 18, 2025 · Updated 6 months ago
- Recipes to train reward models for RLHF. ☆1,527 · Apr 24, 2025 · Updated 11 months ago
- Official implementation of Half-Quadratic Quantization (HQQ). ☆928 · Feb 26, 2026 · Updated last month
- Training LLMs with QLoRA + FSDP. ☆1,538 · Nov 9, 2024 · Updated last year
- A bagel, with everything. ☆326 · Apr 11, 2024 · Updated 2 years ago
- A reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction". ☆22 · May 29, 2024 · Updated last year
- Minimalistic large language model 3D-parallelism training. ☆2,644 · Apr 7, 2026 · Updated last week