xfactlab / orpo
Official repository for ORPO (Odds Ratio Preference Optimization)
☆455 · Updated last year
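ORPO folds preference optimization into supervised fine-tuning by adding an odds-ratio penalty on rejected responses, so no frozen reference model or separate reward model is needed. Below is a minimal sketch of the objective in PyTorch, assuming length-averaged sequence log-probabilities; `orpo_loss`, its argument names, and the `lam` value are illustrative, not the repository's actual API or defaults:

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, nll_chosen, lam=0.1):
    """Sketch of the ORPO objective: SFT loss plus an odds-ratio penalty.

    chosen_logps / rejected_logps: length-averaged log P(y|x) for the
    preferred and rejected responses, shape (batch,).
    nll_chosen: standard SFT negative log-likelihood on the chosen response.
    lam: weight on the odds-ratio term (illustrative value).
    """
    # log odds(y|x) = log p - log(1 - p), computed stably in log space
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
    # L_OR = -log sigmoid(log OR), where OR = odds(chosen) / odds(rejected)
    l_or = -F.logsigmoid(log_odds_chosen - log_odds_rejected)
    return (nll_chosen + lam * l_or).mean()
```

Because odds(y|x) = P(y|x) / (1 − P(y|x)), the penalty pushes the chosen response's odds above the rejected one's while the SFT term keeps the model anchored to the chosen data.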
Alternatives and similar repositories for orpo
Users interested in orpo are comparing it to the libraries listed below.
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆857 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆604 · Updated last week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆713 · Updated 3 months ago
- ☆520 · Updated 7 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context". ☆462 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆357 · Updated 9 months ago
- Generative Representational Instruction Tuning ☆651 · Updated 3 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆498 · Updated 5 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆396 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆617 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆731 · Updated 8 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆421 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆347 · Updated last year
- Code for Quiet-STaR ☆734 · Updated 10 months ago
- Automatic evals for LLMs ☆429 · Updated 2 weeks ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆410 · Updated 8 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 7 months ago
- ☆261 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (see the sketch after this list) ☆897 · Updated 4 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆505 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆243 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆223 · Updated 7 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆556 · Updated 6 months ago
- Reproducible, flexible LLM evaluations ☆213 · Updated last month
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆653 · Updated last year
- Distributed trainer for LLMs ☆577 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆303 · Updated last year
- ☆288 · Updated 10 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆201 · Updated last week
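For contrast with ORPO, the SimPO entry in the list above is also reference-free but drops the SFT term in favor of a length-normalized implicit reward with a target margin. A minimal sketch under the same assumptions as the ORPO example; the function name and the `beta` / `gamma` values are illustrative, not the SimPO repository's defaults:

```python
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, beta=2.0, gamma=0.5):
    """Sketch of the SimPO objective (reference-free, length-normalized).

    chosen_logps / rejected_logps: mean per-token log P(y|x), shape (batch,).
    beta scales the implicit reward; gamma is the target reward margin.
    """
    # reward(y) = beta * average log-prob; Bradley-Terry loss with margin gamma
    margin = beta * chosen_logps - beta * rejected_logps - gamma
    return -F.logsigmoid(margin).mean()
```

The length normalization (using the mean rather than the sum of token log-probs) is what lets SimPO skip the reference model that DPO uses to control for sequence length and base-model likelihood.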