Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules, based on the user's query, to generate the best answer.
☆159, updated Feb 9, 2024
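The adapter-switching idea above can be sketched as a small router that maps a user query to an adapter name, which would then be activated on the model (for example via PEFT's `set_adapter`). The adapter names and keyword rules below are hypothetical illustrations; multi_loras' actual routing logic may differ.

```python
# Minimal sketch of query-based LoRA routing. Adapter names and
# keyword lists are hypothetical; real routing may use embeddings,
# classifiers, or learned gating instead of keyword matching.

ADAPTER_KEYWORDS = {
    "code_lora": ["python", "function", "bug"],
    "math_lora": ["integral", "equation", "prove"],
}

def route_query(query: str, default: str = "base") -> str:
    """Pick the adapter whose keywords best match the query."""
    q = query.lower()
    scores = {
        name: sum(kw in q for kw in kws)
        for name, kws in ADAPTER_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

# With a PEFT model loaded with several adapters, one would then
# activate the chosen adapter, e.g. model.set_adapter(route_query(q)).
print(route_query("Fix this Python function"))       # → code_lora
print(route_query("What's the capital of France?"))  # → base
```

In practice, multiple adapters can be attached to one base model with PEFT's `PeftModel.from_pretrained(..., adapter_name=...)` plus `load_adapter`, so switching is a cheap name lookup rather than a model reload.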
Alternatives and similar repositories for multi_loras
Users who are interested in multi_loras are comparing it to the repositories listed below.
- Low-Rank adapter extraction for fine-tuned transformers models (☆181, updated May 2, 2024)
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition (☆668, updated Jul 22, 2024)
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" (☆91, updated Feb 27, 2024)
- ☆50, updated Mar 14, 2024
- Mixture of Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations (☆12, updated Feb 11, 2024)
- Codebase for Merging Language Models (ICML 2024) (☆864, updated May 5, 2024)
- ☆415, updated Nov 2, 2023
- State-of-the-art parameter-efficient MoE fine-tuning method (☆203, updated Aug 22, 2024)
- Tools for merging pretrained large language models (☆6,895, updated Mar 15, 2026)
- A model-agnostic function to directly remove specified layers from an LLM (☆10, updated May 23, 2024)
- Tools for formatting large language model prompts (☆13, updated Dec 19, 2023)
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite (☆34, updated Mar 2, 2024)
- Multi-LoRA inference server that scales to thousands of fine-tuned LLMs (☆3,739, updated May 21, 2025)
- 5X faster, 60% less memory QLoRA fine-tuning (☆21, updated May 28, 2024)
- ☆177, updated Jul 22, 2024
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (☆474, updated Apr 21, 2024)
- An independent implementation of "Layer-Selective Rank Reduction" (☆240, updated May 26, 2024)
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) (☆145, updated Sep 20, 2024)
- Automated identification of redundant layer blocks for pruning in large language models (☆263, updated Apr 23, 2024)
- A library for easily merging multiple LLM experts and efficiently training the merged LLM (☆507, updated Aug 26, 2024)
- ☆212, updated Feb 3, 2024
- C++ inference wrappers for running blazing-fast embedding services on your favourite serverless platforms like AWS Lambda. By Prithivi Da, PRs welc… (☆23, updated Mar 4, 2024)
- ☆17, updated Dec 16, 2024
- ☆11, updated May 11, 2022
- Repo for the paper "Capturing Semantics for Imputation with Pre-trained Language Models" [ICDE 2021] (☆10, updated Mar 13, 2022)
- ☆21, updated Oct 6, 2023
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens (☆14, updated Mar 30, 2024)
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (☆1,903, updated Jan 21, 2024)
- Large-scale LLM inference engine (☆1,677, updated Mar 12, 2026)
- A library for making RepE control vectors (☆699, updated Sep 24, 2025)
- The heart of the Pulsar App: fast, secure, and shared inference with a modern UI (☆59, updated Dec 1, 2024)
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models (☆70, updated Aug 27, 2023)
- Code and data for the VLDB 2023 paper "RECA: Related Tables Enhanced Column Semantic Type Annotation Framework" (☆12, updated May 7, 2025)
- LLM Chat is an open-source serverless alternative to ChatGPT (☆36, updated Sep 13, 2024)
- Mistral7B playing DOOM (☆29, updated Mar 27, 2024)
- Modified Stanford Alpaca trainer for training Replit's code model (☆43, updated Jun 1, 2023)
- Modified beam search with periodic restarts (☆12, updated Sep 12, 2024)
- ☆202, updated Dec 5, 2024
- Convenient wrapper for fine-tuning and inference of large language models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… (☆145, updated Oct 17, 2023)