[COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
☆669 · Jul 22, 2024
Alternatives and similar repositories for lorahub
Users interested in lorahub are comparing it to the libraries listed below.
- Codebase for Merging Language Models (ICML 2024) ☆863 · May 5, 2024
- Tools for merging pretrained large language models. ☆6,826 · Updated this week
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,149 · Jan 11, 2024
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,899 · Jan 21, 2024
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆557 · Oct 28, 2023
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ☆1,663 · Mar 8, 2024
- FuseAI Project ☆590 · Jan 25, 2025
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,464 · Nov 7, 2023
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆473 · Apr 21, 2024
- Serving multiple LoRA-finetuned LLMs as one ☆1,144 · May 8, 2024
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,676 · Apr 17, 2024
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP 2024) ☆145 · Sep 20, 2024
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆589 · Dec 9, 2024
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,933 · Mar 14, 2024
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,693 · Aug 14, 2024
- ☆210 · Feb 3, 2024
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆159 · Feb 9, 2024
- batched loras ☆350 · Sep 6, 2023
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆822 · Mar 30, 2024
- ☆274 · Oct 31, 2023
- Robust recipes to align language models with human and AI preferences ☆5,510 · Sep 8, 2025
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,843 · Jun 10, 2024
- LOMO: LOw-Memory Optimization ☆988 · Jul 2, 2024
- Editing Models with Task Arithmetic ☆535 · Jan 11, 2024
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆459 · Apr 18, 2024
- Official repository for LongChat and LongEval ☆533 · May 24, 2024
- AllenAI's post-training codebase ☆3,605 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆280 · Nov 3, 2023
- Microsoft Automatic Mixed Precision Library ☆636 · Dec 1, 2025
- Minimum Description Length probing for neural network representations ☆20 · Jan 28, 2025
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆514 · May 20, 2024
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,717 · Updated this week
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆121 · Aug 16, 2023
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,235 · May 8, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,678 · Oct 28, 2024
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,710 · Jun 25, 2024
- Train transformer language models with reinforcement learning. ☆17,523 · Updated this week
- [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning. ☆5,544 · May 21, 2025
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆448 · Oct 16, 2024
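Several of the repositories above (LoraHub, Editing Models with Task Arithmetic, the merging codebases) revolve around the same core operation: adding a weighted combination of low-rank LoRA updates, each of the form B·A, onto a frozen base weight. The sketch below illustrates that arithmetic with plain NumPy; it is a minimal, hypothetical illustration of the idea, not the API of any listed project, and all names, dimensions, and mixing weights are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2  # hypothetical sizes: base weight is d x k, LoRA rank is r

W = rng.standard_normal((d, k))  # frozen base weight of one layer

# Two hypothetical task adapters; each LoRA update is the product B @ A
adapters = [
    (rng.standard_normal((d, r)), rng.standard_normal((r, k)))
    for _ in range(2)
]

def compose(W, adapters, weights):
    """Add a weighted sum of low-rank updates B @ A to the base weight."""
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, adapters))
    return W + delta

# Mix the two adapters with illustrative weights 0.6 and 0.4
merged = compose(W, adapters, [0.6, 0.4])
print(merged.shape)  # (8, 8)
```

In LoraHub-style composition the mixing weights are searched (e.g. gradient-free optimization on a few-shot set) rather than fixed by hand as here; the merge itself stays this cheap because each update is rank-r, not full-rank.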