jie040109 / MLAE
The official PyTorch implementation of the paper "MLAE: Masked LoRA Experts for Visual Parameter-Efficient Fine-Tuning"
☆25 · Updated last month
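For readers unfamiliar with the idea named in the title, the sketch below illustrates one plausible reading of "masked LoRA experts": a frozen linear layer augmented with several rank-1 LoRA experts, a subset of which is kept active by a binary mask. This is a minimal illustrative sketch only; the class and parameter names (`MaskedLoRAExperts`, `num_experts`, `keep_ratio`) are assumptions and do not reflect the repository's actual implementation or API.

```python
# Hypothetical sketch of a "masked LoRA experts" layer (illustrative only;
# not the MLAE repository's actual implementation).
import torch
import torch.nn as nn


class MaskedLoRAExperts(nn.Module):
    """Frozen linear layer plus several rank-1 LoRA experts,
    a subset of which is kept active via a binary mask."""

    def __init__(self, base: nn.Linear, num_experts: int = 8,
                 keep_ratio: float = 0.5, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weight
            p.requires_grad_(False)

        d_in, d_out = base.in_features, base.out_features
        # Each expert i contributes a rank-1 update B_i outer A_i.
        self.A = nn.Parameter(torch.randn(num_experts, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, d_out))
        self.scaling = alpha / num_experts

        # Keep roughly keep_ratio of the experts; here the mask is a fixed
        # buffer, though it could also be resampled or learned.
        mask = (torch.rand(num_experts) < keep_ratio).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d_in)
        lora_in = x @ self.A.t()                  # (..., num_experts)
        lora_in = lora_in * self.mask             # zero out masked experts
        delta = lora_in @ self.B                  # (..., d_out)
        return self.base(x) + self.scaling * delta


if __name__ == "__main__":
    layer = MaskedLoRAExperts(nn.Linear(768, 768))
    out = layer(torch.randn(4, 768))
    print(out.shape)  # torch.Size([4, 768])
```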
Alternatives and similar repositories for MLAE:
Users who are interested in MLAE are comparing it to the repositories listed below.
- CLIP-MoE: Mixture of Experts for CLIP ☆23 · Updated 3 months ago
- ☆14 · Updated 2 months ago
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆17 · Updated 4 months ago
- [ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆65 · Updated last year
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆76 · Updated 9 months ago
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆22 · Updated 2 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆43 · Updated 6 months ago
- Instruction Tuning in Continual Learning paradigm ☆38 · Updated last month
- [Preprint] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆59 · Updated 4 months ago
- ☆40 · Updated last month
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆33 · Updated 9 months ago
- Code release for VTW (AAAI 2025 Oral) ☆28 · Updated this week
- LCA-on-the-line (ICML 2024 Oral) ☆11 · Updated 3 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆91 · Updated 2 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆40 · Updated 2 months ago
- ☆91 · Updated 6 months ago
- The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsifi… ☆15 · Updated last month
- Adapting LLaMA Decoder to Vision Transformer ☆26 · Updated 7 months ago
- [CVPR2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆34 · Updated last month
- ☆22 · Updated 7 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆77 · Updated 9 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆75 · Updated last month
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆148 · Updated last month
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆45 · Updated 8 months ago
- [ICCV2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated last year
- [NeurIPS2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆85 · Updated last year
- Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models ☆86 · Updated 10 months ago
- Code for ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆15 · Updated 8 months ago
- Exploring prompt tuning with pseudolabels for multiple modalities, learning settings, and training strategies. ☆47 · Updated 2 months ago