[COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
☆668 · Jul 22, 2024 · Updated last year
Alternatives and similar repositories for lorahub
Users interested in lorahub are comparing it to the repositories listed below.
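For context, LoraHub's core idea is to compose several task-specific LoRA modules into a single module via a weighted combination of their low-rank factors, with the weights found by gradient-free search on a handful of examples from the unseen task. Below is a minimal sketch of that weighted combination; the function name and shapes are illustrative, not the official LoraHub API.

```python
import torch

def compose_loras(lora_As, lora_Bs, weights):
    """Weighted element-wise composition of LoRA factors (illustrative).

    lora_As: list of A factors, each of shape (r, d_in)
    lora_Bs: list of B factors, each of shape (d_out, r)
    weights: per-module coefficients; in LoraHub these are found by a
             gradient-free optimizer on a few examples of the target task.
    """
    a_hat = sum(w * a for w, a in zip(weights, lora_As))
    b_hat = sum(w * b for w, b in zip(weights, lora_Bs))
    return b_hat @ a_hat  # merged update, applied as W + delta_W

# Toy usage: three rank-4 LoRA modules for a 16x16 weight matrix.
As = [torch.randn(4, 16) for _ in range(3)]
Bs = [torch.randn(16, 4) for _ in range(3)]
delta_w = compose_loras(As, Bs, weights=[0.5, 0.3, 0.2])
print(delta_w.shape)  # torch.Size([16, 16])
```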
- Codebase for Merging Language Models (ICML 2024) ☆864 · May 5, 2024 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer. ☆159 · Feb 9, 2024 · Updated 2 years ago
- Tools for merging pretrained large language models. ☆6,895 · Mar 15, 2026 · Updated last week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,903 · Jan 21, 2024 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP 2024) ☆145 · Sep 20, 2024 · Updated last year
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆474 · Apr 21, 2024 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆557 · Oct 28, 2023 · Updated 2 years ago
- FuseAI Project ☆592 · Jan 25, 2025 · Updated last year
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method. ☆1,464 · Nov 7, 2023 · Updated 2 years ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,667 · Mar 8, 2024 · Updated 2 years ago
- ☆212 · Feb 3, 2024 · Updated 2 years ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,154 · Jan 11, 2024 · Updated 2 years ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆203 · Aug 22, 2024 · Updated last year
- batched loras ☆351 · Sep 6, 2023 · Updated 2 years ago
- ☆274 · Oct 31, 2023 · Updated 2 years ago
- Serving multiple LoRA finetuned LLMs as one ☆1,148 · May 8, 2024 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆591 · Dec 9, 2024 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,932 · Mar 14, 2024 · Updated 2 years ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Nov 26, 2023 · Updated 2 years ago
- ☆13 · Jul 25, 2023 · Updated 2 years ago
- Editing Models with Task Arithmetic ☆534 · Jan 11, 2024 · Updated 2 years ago (a minimal sketch of task arithmetic appears after this list)
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,686 · Apr 17, 2024 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,697 · Aug 14, 2024 · Updated last year
- X-LoRA: Mixture of LoRA Experts ☆267 · Aug 4, 2024 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆372 · Jun 1, 2023 · Updated 2 years ago
- [SIGIR 2024] The official implementation code of MOELoRA ☆191 · Jul 22, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,858 · Jun 10, 2024 · Updated last year
- Official implementation of the NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory" ☆822 · Mar 30, 2024 · Updated last year
- Official repository for LongChat and LongEval ☆533 · May 24, 2024 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning ☆20,841 · Mar 18, 2026 · Updated last week (a usage sketch of PEFT's weighted-adapter composition appears after this list)
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆464 · Apr 18, 2024 · Updated last year
- AllenAI's post-training codebase ☆3,643 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,535 · Sep 8, 2025 · Updated 6 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Feb 27, 2024 · Updated 2 years ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆507 · Aug 26, 2024 · Updated last year
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Feb 9, 2023 · Updated 3 years ago
- LOMO: LOw-Memory Optimization ☆989 · Jul 2, 2024 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆128 · Mar 30, 2024 · Updated last year
- Train transformer language models with reinforcement learning ☆17,781 · Updated this week
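Two entries above describe techniques concrete enough to sketch. First, task arithmetic (the "Editing Models with Task Arithmetic" entry): a task vector is the element-wise difference between fine-tuned and base weights; adding task vectors composes behaviors, and negating one removes it. A minimal sketch over plain PyTorch state dicts; the helper names are hypothetical, not the paper's official code.

```python
import torch

def task_vector(base_sd, finetuned_sd):
    """Task vector = fine-tuned weights minus base weights (per tensor)."""
    return {k: finetuned_sd[k] - base_sd[k] for k in base_sd}

def apply_task_vectors(base_sd, vectors, scale=1.0):
    """Add a scaled sum of task vectors to the base weights.

    Use a negative scale (or negate a vector) to remove a behavior,
    as in the forgetting experiments of the task-arithmetic paper.
    """
    merged = {k: v.clone() for k, v in base_sd.items()}
    for vec in vectors:
        for k, delta in vec.items():
            merged[k] += scale * delta
    return merged
```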
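Second, for the 🤗 PEFT entry: PEFT ships an add_weighted_adapter method on LoRA models for building a new adapter as a weighted combination of already-loaded ones, the closest off-the-shelf counterpart to LoraHub-style composition (you supply the weights; no search is performed). A sketch assuming two compatible LoRA adapters for the same base model; the model and adapter ids below are hypothetical.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical ids: any base model plus two LoRA adapters trained on it.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "user/lora-task-a", adapter_name="task_a")
model.load_adapter("user/lora-task-b", adapter_name="task_b")

# Combine the two adapters into a third; "linear" requires equal ranks.
model.add_weighted_adapter(
    adapters=["task_a", "task_b"],
    weights=[0.7, 0.3],
    adapter_name="combined",
    combination_type="linear",
)
model.set_adapter("combined")  # generate with the combined adapter
```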