ContextualAI / HALOs
A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs).
☆817 · Updated 2 weeks ago
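To make the "human-aware loss" idea concrete, here is a minimal PyTorch sketch of DPO, the first loss named in the description above. The function name and tensor layout are illustrative assumptions, not the HALOs API.

```python
# Minimal sketch of the DPO loss (Rafailov et al., 2023), one of the
# human-aware losses HALOs implements. Function name and tensor layout
# are illustrative only, not the HALOs library's actual API.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each input is a (batch,) tensor of summed token log-probs for a response."""
    # Implicit rewards: scaled log-ratio of the policy to a frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the margin pushes chosen responses above rejected ones.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```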
Alternatives and similar repositories for HALOs:
Users interested in HALOs are comparing it to the libraries listed below.
- RewardBench: the first evaluation tool for reward models. ☆526 · Updated 3 weeks ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (a minimal sketch of its loss appears after this list). ☆850 · Updated last month
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆473 · Updated 2 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆421 · Updated 5 months ago
- Official repository for ORPO ☆444 · Updated 9 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆705 · Updated 5 months ago
- Generative Representational Instruction Tuning ☆610 · Updated last week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆659 · Updated this week
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆514 · Updated last month
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆542 · Updated 3 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆543 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆597 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆619 · Updated 8 months ago
- Codebase for Merging Language Models (ICML 2024) ☆801 · Updated 10 months ago
- Reading list for instruction tuning, a trend that starts from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆766 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆453 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆393 · Updated 10 months ago
- Aligning Large Language Models with Human: A Survey ☆725 · Updated last year
- Scalable toolkit for efficient model alignment ☆750 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,313 · Updated this week
- Representation Engineering: A Top-Down Approach to AI Transparency ☆807 · Updated 7 months ago
- Code for Quiet-STaR ☆721 · Updated 7 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,130 · Updated 10 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆333 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆316 · Updated last year
- A bibliography and survey of the papers surrounding o1 ☆1,182 · Updated 4 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI ☆1,372 · Updated 11 months ago
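For contrast with the reference-based DPO sketch above, here is a minimal PyTorch sketch of the SimPO objective from the SimPO entry in the list: a reference-free, length-normalized preference loss with a target reward margin. Names and hyperparameter defaults are illustrative assumptions, not the official repository's API.

```python
# Minimal sketch of the SimPO loss (Meng et al., NeurIPS 2024): reference-free,
# length-normalized preference optimization. Names and defaults are illustrative.
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps: torch.Tensor,    # (batch,) summed token log-probs
               rejected_logps: torch.Tensor,  # (batch,) summed token log-probs
               chosen_lens: torch.Tensor,     # (batch,) response lengths in tokens
               rejected_lens: torch.Tensor,
               beta: float = 2.0,             # illustrative default
               gamma: float = 0.5) -> torch.Tensor:  # target reward margin
    # Length-normalized average log-probability serves as the implicit reward,
    # replacing the reference model that DPO requires.
    chosen_rewards = beta * chosen_logps / chosen_lens
    rejected_rewards = beta * rejected_logps / rejected_lens
    # Logistic loss on the margin, offset by gamma, separates chosen from rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards - gamma).mean()
```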