wutaiqiang / MoSLoRA
☆124 · Updated last year
Alternatives and similar repositories for MoSLoRA
Users who are interested in MoSLoRA are comparing it to the libraries listed below.
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆161 · Updated 5 months ago
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆136 · Updated 7 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆233 · Updated last year
- ☆151 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆147 · Updated 4 months ago
- ☆168 · Updated last year
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆44 · Updated 5 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆104 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆85 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆197 · Updated last year
- ☆215 · Updated last week
- Code release for VTW (AAAI 2025 Oral) ☆65 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆87 · Updated 9 months ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆53 · Updated 10 months ago
- ☆192 · Updated last year
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆59 · Updated last year
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆89 · Updated last year
- (ICLR 2025 Spotlight) DEEM: Official implementation of "Diffusion models serve as the eyes of large language models for image perception". ☆44 · Updated 5 months ago
- [NeurIPS 2024] For the paper "Parameter Competition Balancing for Model Merging" ☆47 · Updated last year
- ☆41 · Updated last year
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆75 · Updated last month
- ☆27 · Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆72 · Updated 9 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆74 · Updated 2 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆49 · Updated last year
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆156 · Updated 2 months ago
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆69 · Updated 5 months ago
- ☆61 · Updated 7 months ago
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆66 · Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆127 · Updated 8 months ago