leo-yangli / VB-LoRA
This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024).
☆37 · Updated 5 months ago
Alternatives and similar repositories for VB-LoRA:
Users interested in VB-LoRA are comparing it to the repositories listed below
- EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning (ACL 2023) ☆25 · Updated last year
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆44 · Updated 2 weeks ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆55 · Updated last month
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆52 · Updated 3 months ago
- ☆100 · Updated 8 months ago
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆20 · Updated 6 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆108 · Updated 2 weeks ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆23 · Updated 5 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆29 · Updated 5 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Vision-Language Models ☆26 · Updated last month
- Official code for paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster. ☆59 · Updated 3 months ago
- iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models ☆18 · Updated last month
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers. ☆32 · Updated 2 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆52 · Updated 3 weeks ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆54 · Updated 5 months ago
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models ☆43 · Updated 4 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- ☆70 · Updated 2 months ago
- AutoHallusion Codebase (EMNLP 2024) ☆18 · Updated 3 months ago
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆87 · Updated last year
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆72 · Updated last year
- ☆39 · Updated 4 months ago
- The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" (NeurIPS 2024) ☆43 · Updated 2 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆28 · Updated 10 months ago
- BESA is a differentiable weight pruning technique for large language models. ☆14 · Updated last year
- ☆35 · Updated 8 months ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆29 · Updated 5 months ago
- [ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆66 · Updated last year
- GIFT: Generative Interpretable Fine-Tuning ☆20 · Updated 5 months ago
- ☆40 · Updated 2 months ago