wutaiqiang / MoSLoRA
☆76 · Updated 4 months ago
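MoSLoRA (Mixture-of-Subspaces in Low-Rank Adaptation) extends LoRA by inserting a small trainable mixer matrix between the down-projection A and up-projection B, letting the rank-r subspaces interact. Below is a minimal PyTorch sketch of that idea; the initialization and scaling choices are illustrative assumptions, not the repo's exact code.

```python
import torch
import torch.nn as nn

class MoSLoRALinear(nn.Module):
    """Sketch of a mixture-of-subspaces LoRA layer.

    Plain LoRA computes W x + s * B A x; MoSLoRA inserts a trainable
    r x r mixer M between the factors: W x + s * B M A x, so every
    rank-1 subspace of A can feed every column of B.
    """

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                # frozen pretrained weight
        self.A = nn.Parameter(torch.empty(r, in_features))    # down-projection
        self.M = nn.Parameter(torch.empty(r, r))              # subspace mixer
        self.B = nn.Parameter(torch.zeros(out_features, r))   # zeros: update starts at 0
        nn.init.kaiming_uniform_(self.A, a=5 ** 0.5)          # assumed init
        nn.init.kaiming_uniform_(self.M, a=5 ** 0.5)          # assumed init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, in) -> (batch, r) -> mixed (batch, r) -> (batch, out)
        return self.base(x) + self.scaling * (x @ self.A.T @ self.M.T @ self.B.T)
```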
Related projects
Alternatives and complementary repositories for MoSLoRA
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning (see the sketch after this list) ☆74 · Updated this week
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference" ☆75 · Updated last week
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆74 · Updated 3 weeks ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆102 · Updated 2 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and a Visual Debias Decoding strategy ☆72 · Updated 7 months ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆29 · Updated 7 months ago
- [MM2024, Oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆41 · Updated 3 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆17 · Updated last month
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆83 · Updated 11 months ago
- [NeurIPS'24] Official PyTorch Implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ☆50 · Updated last month
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆15 · Updated 6 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆69 · Updated 8 months ago
- Awesome-Low-Rank-Adaptation ☆38 · Updated last month
- Dataset pruning for ImageNet and LAION-2B. ☆69 · Updated 4 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆25 · Updated last week
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆55 · Updated last month
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆92 · Updated 2 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆100 · Updated 6 months ago
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vision models ☆82 · Updated last month
- Making LLaVA Tiny via MoE-Knowledge Distillation ☆60 · Updated 3 weeks ago
- A repository for DenseSSMs ☆88 · Updated 7 months ago
- PyTorch implementation of StableMask (ICML'24) ☆12 · Updated 4 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆44 · Updated this week
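For orientation, here is a minimal sketch of the asymmetric (Hydra-style) LoRA structure referenced in the HydraLoRA entry above: one shared down-projection A and several expert-specific up-projection heads B, mixed by a soft router. The router design, initialization, and scaling are assumptions for illustration, not HydraLoRA's exact implementation.

```python
import torch
import torch.nn as nn

class HydraLoRALinear(nn.Module):
    """Sketch of an asymmetric LoRA layer: shared A, per-expert B heads."""

    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, n_experts: int = 4, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                # frozen pretrained weight
        self.A = nn.Parameter(torch.empty(r, in_features))    # shared down-projection
        self.B = nn.Parameter(torch.zeros(n_experts, out_features, r))  # expert up-projections
        self.router = nn.Linear(in_features, n_experts)       # assumed soft router
        nn.init.kaiming_uniform_(self.A, a=5 ** 0.5)          # assumed init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = x @ self.A.T                                  # (batch, r), shared by all experts
        gates = torch.softmax(self.router(x), dim=-1)     # (batch, n_experts)
        heads = torch.einsum("br,eor->beo", z, self.B)    # per-expert outputs
        mixed = torch.einsum("be,beo->bo", gates, heads)  # router-weighted sum
        return self.base(x) + self.scaling * mixed
```

Sharing A keeps the parameter count close to a single LoRA while the multiple B heads give MoE-style specialization, which is the asymmetry the paper's title refers to.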