MoSLoRA · ☆126 · Updated Jul 6, 2024
Alternatives and similar repositories for MoSLoRA
Users interested in MoSLoRA are comparing it to the repositories listed below.
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment · ☆401 · Updated Apr 29, 2024
- ☆43 · Updated Jul 22, 2024
- Code for ACL 2024 "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" · ☆33 · Updated Feb 19, 2025
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning · ☆176 · Updated Jan 29, 2026
- [SIGIR'24] The official implementation code of MOELoRA · ☆188 · Updated Jul 22, 2024
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" · ☆24 · Updated Mar 16, 2025
- Awesome Low-Rank Adaptation · ☆59 · Updated Aug 6, 2025
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning · ☆13 · Updated Sep 2, 2024
- ☆10 · Updated Apr 16, 2024
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning · ☆232 · Updated Dec 3, 2024
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" · ☆41 · Updated Oct 11, 2024
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and fine-tune the quantized models · ☆15 · Updated Jul 18, 2024
- Source code for the paper "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" (ICML 2025) · ☆36 · Updated Apr 2, 2025
- ☆34 · Updated Aug 23, 2023
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models · ☆84 · Updated Mar 5, 2024
- ☆18 · Updated Nov 10, 2024
- MegaRAG: Multimodal Graph-based RAG · ☆37 · Updated Sep 16, 2025
- ☆218 · Updated Nov 25, 2025
- ☆152 · Updated Sep 9, 2024
- ☆177 · Updated Jul 22, 2024
- ☆20 · Updated Oct 13, 2024
- [ICLR'25] Code for KaSA, the official implementation of "KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models" · ☆20 · Updated Jan 16, 2025
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" · ☆144 · Updated Apr 8, 2025
- ☆22 · Updated Nov 19, 2024
- Awesome-Low-Rank-Adaptation · ☆128 · Updated Oct 13, 2024
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… · ☆61 · Updated this week
- ☆19 · Updated Jan 3, 2025
- ☆15 · Updated Mar 20, 2025
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models · ☆39 · Updated Jan 9, 2025
- Code and data for QueryAgent (ACL 2024) · ☆20 · Updated Dec 19, 2024
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… · ☆32 · Updated Jan 28, 2026
- ☆111 · Updated Jan 2, 2025
- Code for the Neural Networks journal paper StoCFL: A stochastically clustered federated learning framework for Non-IID data with dynamic cl… · ☆12 · Updated Apr 28, 2024
- [COLING 2025 Industry] LoRA Soups · ☆18 · Updated Nov 29, 2024
- ☆30 · Updated Jan 8, 2026
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation · ☆10 · Updated May 19, 2025
- ☆14 · Updated Jan 24, 2025
- Tuning-Free Image Editing with Fidelity and Editability via Unified Latent Diffusion Model · ☆13 · Updated Dec 29, 2024
- ☆274 · Updated Oct 31, 2023