TUDB-Labs / Awesome-LLM-LoRA
☆15 · Updated last year
Alternatives and similar repositories for Awesome-LLM-LoRA
Users interested in Awesome-LLM-LoRA are comparing it to the libraries listed below.
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆222 · Updated 9 months ago
- Awesome-Low-Rank-Adaptation ☆115 · Updated 10 months ago
- Awesome Low-Rank Adaptation ☆43 · Updated 3 weeks ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆126 · Updated last month
- MoCLE (first MLLM with MoE for instruction customization and generalization) (https://arxiv.org/abs/2312.12379) ☆43 · Updated 2 months ago
- ☆115 · Updated last year
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆36 · Updated 10 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆154 · Updated 2 months ago
- ☆151 · Updated last year
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont…" ☆51 · Updated 9 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆259 · Updated last week
- The official implementation of MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆61 · Updated 2 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆180 · Updated last year
- Code release for VTW (AAAI 2025 Oral) ☆49 · Updated last month
- [ICML 2025] Official implementation of the paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in…' ☆145 · Updated last month
- ☆148 · Updated 11 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆112 · Updated 5 months ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆49 · Updated 7 months ago
- ☆26 · Updated last year
- Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models" ☆31 · Updated 4 months ago
- [SIGIR'24] The official implementation code of MOELoRA ☆34 · Updated last year
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆42 · Updated 3 months ago
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models" ☆412 · Updated last month
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆81 · Updated last year
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆34 · Updated 7 months ago
- Cross-Self KV Cache Pruning for Efficient Vision-Language Inference ☆10 · Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆84 · Updated 6 months ago
- MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer ☆46 · Updated 11 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" ☆99 · Updated 9 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆344 · Updated 2 years ago