Loads multiple LoRA modules simultaneously and automatically switches to the most appropriate combination of LoRA modules, based on the user's query, to generate the best answer.
☆159 · Feb 9, 2024 · Updated 2 years ago
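The idea described above — picking the best-matching LoRA adapter for each incoming query — can be sketched with a toy keyword-overlap router. The adapter names and keyword lists below are purely illustrative assumptions; multi_loras's actual selection logic may differ.

```python
# Hypothetical sketch of query-based LoRA adapter routing (NOT multi_loras's
# actual implementation): score each adapter's keyword set against the query
# and activate the adapter with the largest overlap.
ADAPTER_KEYWORDS = {
    "code-lora": {"python", "function", "bug", "compile"},
    "math-lora": {"integral", "equation", "prove", "solve"},
    "chat-lora": {"hello", "recommend", "opinion"},
}

def route_query(query: str, default: str = "chat-lora") -> str:
    """Return the adapter whose keyword set overlaps the query the most."""
    words = set(query.lower().split())
    scores = {name: len(words & kws) for name, kws in ADAPTER_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a general-purpose adapter when nothing matches.
    return best if scores[best] > 0 else default
```

A real system would follow the chosen name with something like PEFT's `model.set_adapter(...)` to activate the corresponding LoRA weights before generation.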
Alternatives and similar repositories for multi_loras
Users interested in multi_loras are comparing it to the libraries listed below.
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 · May 2, 2024 · Updated last year
- A model-agnostic function to directly remove specified layers from an LLM ☆10 · May 23, 2024 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆669 · Jul 22, 2024 · Updated last year
- Mixture-of-Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations. ☆12 · Feb 11, 2024 · Updated 2 years ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Feb 27, 2024 · Updated 2 years ago
- Codebase for Merging Language Models (ICML 2024) ☆863 · May 5, 2024 · Updated last year
- 5X faster, 60% less memory QLoRA fine-tuning ☆21 · May 28, 2024 · Updated last year
- ☆50 · Mar 14, 2024 · Updated last year
- Parameter-Efficient Sparsity Crafting: From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP '24) ☆145 · Sep 20, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆6,826 · Updated this week
- ☆415 · Nov 2, 2023 · Updated 2 years ago
- ☆176 · Jul 22, 2024 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆473 · Apr 21, 2024 · Updated last year
- Automated identification of redundant layer blocks for pruning in large language models ☆261 · Apr 23, 2024 · Updated last year
- State-of-the-art parameter-efficient MoE fine-tuning method ☆203 · Aug 22, 2024 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Mar 2, 2024 · Updated 2 years ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,728 · May 21, 2025 · Updated 9 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆507 · Aug 26, 2024 · Updated last year
- LLM Chat is an open-source serverless alternative to ChatGPT. ☆36 · Sep 13, 2024 · Updated last year
- DEPRECATED: see ChiScraper instead ☆17 · Oct 13, 2024 · Updated last year
- ☆17 · Dec 16, 2024 · Updated last year
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆72 · Feb 29, 2024 · Updated 2 years ago
- Our own implementation of "Layer-Selective Rank Reduction" ☆240 · May 26, 2024 · Updated last year
- An efficient "factory" to build multiple LoRA adapters ☆372 · Feb 13, 2025 · Updated last year
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,899 · Jan 21, 2024 · Updated 2 years ago
- QuIP quantization ☆62 · Mar 17, 2024 · Updated last year
- ☆21 · Oct 6, 2023 · Updated 2 years ago
- Large-scale LLM inference engine ☆1,658 · Feb 17, 2026 · Updated 2 weeks ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆280 · Nov 3, 2023 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆145 · Oct 17, 2023 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆70 · Aug 27, 2023 · Updated 2 years ago
- ☆129 · Jan 22, 2024 · Updated 2 years ago
- ☆274 · Oct 31, 2023 · Updated 2 years ago
- ☆202 · Dec 5, 2024 · Updated last year
- Customizable implementation of the Self-Instruct paper. ☆1,049 · Mar 7, 2024 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆145 · Mar 13, 2024 · Updated last year
- ☆12 · Aug 1, 2025 · Updated 7 months ago
- Cleanai (https://github.com/willmil11/cleanai), except I'm making it in C now. Fast and clean from the start this time :) ☆17 · Updated this week
- A library for making RepE control vectors ☆691 · Sep 24, 2025 · Updated 5 months ago