TUDB-Labs / Awesome-LLM-LoRA
☆15 · Updated last year
Alternatives and similar repositories for Awesome-LLM-LoRA
Users interested in Awesome-LLM-LoRA are comparing it to the repositories listed below.
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR 2024) ☆49 · Updated 2 months ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆31 · Updated 7 months ago
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆86 · Updated last year
- Awesome Low-Rank Adaptation ☆39 · Updated 9 months ago
- The official PyTorch implementation of the paper "MLAE: Masked LoRA Experts for Visual Parameter-Efficient Fine-Tuning" ☆29 · Updated 6 months ago
- MoCLE, the first MLLM with MoE for instruction customization and generalization (https://arxiv.org/abs/2312.12379) ☆38 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆97 · Updated 4 months ago
- Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach (CVPR 2024) ☆22 · Updated 10 months ago
- Multimodal Instruction Tuning with Conditional Mixture of LoRA (ACL 2024) ☆20 · Updated 9 months ago
- [NeurIPS 2024 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆203 · Updated 6 months ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆19 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] Code for the paper "Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts" ☆52 · Updated 7 months ago
- Enhance Vision-Language Alignment with Noise (AAAI 2025) ☆24 · Updated 5 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆77 · Updated last year
- [SIGIR 2024] The official implementation of MOELoRA ☆31 · Updated 10 months ago
- EMPO, a fully unsupervised RLVR method ☆30 · Updated this week
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆50 · Updated 10 months ago
- ICLR 2025 ☆26 · Updated 2 weeks ago
- A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity ☆24 · Updated 2 weeks ago
- A simple implementation of LoRA+: Efficient Low Rank Adaptation of Large Models ☆9 · Updated last year
- [ICML 2024 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆39 · Updated last year
- Awesome-Low-Rank-Adaptation ☆102 · Updated 7 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆37 · Updated 7 months ago
- This repository periodically updates MTL papers and resources ☆55 · Updated last month
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆99 · Updated 2 months ago
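
For readers unfamiliar with the idea that ties these repositories together, below is a minimal, illustrative PyTorch sketch of a LoRA layer: a frozen pretrained weight plus a trainable low-rank update scaled by alpha/r. The class name, rank, and initialization here are illustrative assumptions, not the API of any repository listed above.

```python
# Minimal, illustrative LoRA linear layer (PyTorch).
# The class name, rank r, and alpha are illustrative choices,
# not the API of any repository listed above.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Computes y = W x + (alpha / r) * B A x, with W frozen and A, B trainable."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # small random init
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(128, 64, r=4)
    out = layer(torch.randn(2, 128))
    print(out.shape)  # torch.Size([2, 64])
```

Most of the repositories above build on this basic form, e.g., by routing among several such adapters with a mixture-of-experts gate or by adapting the rank per layer.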