An Efficient "Factory" to Build Multiple LoRA Adapters
☆372 · Feb 13, 2025 · Updated last year
Alternatives and similar repositories for mLoRA
Users interested in mLoRA are comparing it to the libraries listed below.
- This repository has been transferred to https://github.com/TUDB-Labs/MoE-PEFT ☆22 · Aug 16, 2024 · Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆133 · Mar 11, 2025 · Updated 11 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆203 · Aug 22, 2024 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ☆188 · Jul 22, 2024 · Updated last year
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,899 · Jan 21, 2024 · Updated 2 years ago
- Batched LoRAs ☆350 · Sep 6, 2023 · Updated 2 years ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆172 · Jan 29, 2026 · Updated last month
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆401 · Apr 29, 2024 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer… ☆159 · Feb 9, 2024 · Updated 2 years ago
- ☆274 · Oct 31, 2023 · Updated 2 years ago
- Dataset Reset Policy Optimization ☆31 · Apr 12, 2024 · Updated last year
- ☆196 · Jul 13, 2024 · Updated last year
- X-LoRA: Mixture of LoRA Experts ☆267 · Aug 4, 2024 · Updated last year
- ☆14 · Apr 29, 2025 · Updated 10 months ago
- ☆13 · Jan 22, 2025 · Updated last year
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆409 · Jun 30, 2025 · Updated 8 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper ☆32 · May 29, 2024 · Updated last year
- Serving multiple LoRA finetuned LLMs as one ☆1,144 · May 8, 2024 · Updated last year
- ☆126 · Jul 6, 2024 · Updated last year
- ☆71 · Mar 26, 2025 · Updated 11 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs ☆24 · Sep 23, 2025 · Updated 5 months ago
- ☆15 · Nov 7, 2024 · Updated last year
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆31 · Jan 28, 2026 · Updated last month
- ☆148 · Apr 16, 2024 · Updated last year
- Repository for sparse finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Jan 15, 2024 · Updated 2 years ago
- ☆176 · Jul 22, 2024 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆669 · Jul 22, 2024 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆416 · Jun 25, 2025 · Updated 8 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,728 · May 21, 2025 · Updated 9 months ago
- This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024). ☆42 · Oct 15, 2024 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆84 · Mar 5, 2024 · Updated 2 years ago
- I-SHEEP: Iterative Self-enHancEmEnt Paradigm of LLMs through Self-Instruct and Self-Assessment ☆17 · Jan 16, 2025 · Updated last year
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,084 · Updated this week
- ☆71 · Jul 11, 2024 · Updated last year
- Code for our EMNLP 2023 paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,230 · Mar 10, 2024 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆90 · Nov 13, 2024 · Updated last year
- A Chinese version of Llama3, built on Llama3 via further CPT, SFT, and ORPO ☆17 · Apr 24, 2024 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Sep 24, 2024 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Apr 15, 2024 · Updated last year