leo-yangli / VB-LoRA
This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024).
☆38 · Updated 8 months ago
Alternatives and similar repositories for VB-LoRA
Users interested in VB-LoRA are comparing it to the repositories listed below.
- Data distillation benchmark ☆66 · Updated last week
- ICLR 2025 ☆26 · Updated last month
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆30 · Updated 8 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆39 · Updated 2 months ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆22 · Updated 2 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation" (ICML 2023) ☆33 · Updated last year
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆19 · Updated last year
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆19 · Updated 2 months ago
- One-shot Entropy Minimization ☆149 · Updated 2 weeks ago
- EMPO, a fully unsupervised RLVR method ☆40 · Updated 2 weeks ago
- ☆105 · Updated 11 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆62 · Updated 3 weeks ago
- ☆42 · Updated 7 months ago
- The official implementation of MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆52 · Updated 3 months ago
- [ICML '24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆39 · Updated last year
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆120 · Updated 2 months ago
- Code release for VTW (AAAI 2025 Oral) ☆43 · Updated 5 months ago
- VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆47 · Updated 3 weeks ago
- Parameter-Efficient Fine-Tuning for Foundation Models ☆69 · Updated 2 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 3 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆78 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆28 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆42 · Updated 8 months ago
- Implementation of the paper "Training Free Pretrained Model Merging" (CVPR 2024) ☆30 · Updated last year
- BESA, a differentiable weight pruning technique for large language models ☆17 · Updated last year
- Prioritize Alignment in Dataset Distillation ☆20 · Updated 6 months ago
- Awesome-Low-Rank-Adaptation ☆104 · Updated 8 months ago
- [ICLR 2025 Spotlight] DEEM: Official implementation of "Diffusion models serve as the eyes of large language models for image perception" ☆34 · Updated 3 months ago
- iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models ☆19 · Updated 4 months ago
- Elucidated Dataset Condensation (NeurIPS 2024) ☆21 · Updated 8 months ago