sail-sg / lorahub
[COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
☆619 · Updated 8 months ago
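LoraHub composes a pool of pre-trained, task-specific LoRA modules for an unseen task: each module gets a scalar weight, the weighted low-rank updates are merged into a single adapter, and the weights are tuned with a gradient-free optimizer on a handful of examples. Below is a minimal, hypothetical sketch of the composition step; the function and variable names are illustrative, not from the repository (which builds on PEFT adapters and uses nevergrad for the weight search).

```python
# A minimal, hypothetical sketch of LoraHub-style composition, assuming plain
# PyTorch tensors stand in for per-layer LoRA factors. The actual repository
# works with PEFT adapters and searches the weights with a gradient-free
# optimizer on a few-shot example set.
import torch

def compose_loras(lora_modules, weights):
    """Merge several LoRA modules into one via a weighted sum of their updates.

    lora_modules: list of dicts mapping layer name -> (A, B) low-rank factors,
                  where A has shape (rank, d) and B has shape (d, rank).
    weights: one scalar per module, e.g. proposed by the optimizer.
    """
    merged = {}
    for name in lora_modules[0]:
        # Weighted sum of each module's full update matrix B @ A for this layer.
        merged[name] = sum(
            w * (mod[name][1] @ mod[name][0])
            for w, mod in zip(weights, lora_modules)
        )
    return merged

# Toy usage: compose two rank-4 modules on a single 16x16 projection layer.
rank, d = 4, 16
mods = [{"proj": (torch.randn(rank, d), torch.randn(d, rank))} for _ in range(2)]
delta = compose_loras(mods, weights=[0.7, 0.3])
print(delta["proj"].shape)  # torch.Size([16, 16])
```

Since the composition weights are the only free parameters, a few dozen evaluations of a gradient-free search are enough to adapt the merged adapter to a new task.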
Alternatives and similar repositories for lorahub:
Users interested in lorahub are comparing it to the repositories listed below.
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆393 · Updated 10 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆514 · Updated last month
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆597 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆543 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆646 · Updated 9 months ago
- Codebase for Merging Language Models (ICML 2024) ☆801 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models. ☆526 · Updated 3 weeks ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆542 · Updated 3 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆817 · Updated 2 weeks ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆448 · Updated 11 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆499 · Updated 10 months ago
- [NeurIPS 2023] RRHF & Wombat ☆804 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆475 · Updated 2 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆705 · Updated 5 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆421 · Updated 5 months ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆316 · Updated last year
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆628 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆454 · Updated last year
- Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,140 · Updated last year
- [ICML'24] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" ☆393 · Updated 5 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆470 · Updated last year
- Reading list for instruction tuning, a trend starting from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆766 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆236 · Updated last year
- Official repository for ORPO ☆445 · Updated 9 months ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆552 · Updated last year
- Generative Representational Instruction Tuning ☆610 · Updated last week
- DSIR: a large-scale data selection framework for language model training ☆244 · Updated 11 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆850 · Updated last month
- A large-scale, fine-grained, diverse preference dataset (and models). ☆333 · Updated last year