FarinaMatteo / multiflow
[CVPR '24] Official implementation of the paper "Multiflow: Shifting Towards Task-Agnostic Vision-Language Pruning".
☆22 · Updated last month
Alternatives and similar repositories for multiflow:
Users interested in multiflow are comparing it to the libraries listed below.
- Official Repository of "On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers" (Visual Continual Learnin… ☆8 · Updated last year
- [CVPR-25🔥] Test-time Counterattacks (TTC) towards adversarial robustness of CLIP ☆21 · Updated last month
- [CVPR2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆39 · Updated 4 months ago
- ☆16 · Updated 5 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆32 · Updated 6 months ago
- [NeurIPS '24] Frustratingly easy Test-Time Adaptation of VLMs!! ☆44 · Updated last month
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆46 · Updated 3 months ago
- Official PyTorch implementation of "RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language… ☆10 · Updated 4 months ago
- ECCV24, NeurIPS24, Benchmarking Generalized Out-of-Distribution Detection with Vision-Language Models ☆23 · Updated 4 months ago
- ☆24 · Updated 10 months ago
- ☆20 · Updated last year
- [ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆70 · Updated last year
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆23 · Updated 2 months ago
- Less is More: High-value Data Selection for Visual Instruction Tuning ☆12 · Updated 3 months ago
- [CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zha… ☆53 · Updated last year
- Instruction Tuning in the Continual Learning paradigm ☆47 · Updated 3 months ago
- Exploring prompt tuning with pseudolabels for multiple modalities, learning settings, and training strategies. ☆50 · Updated 5 months ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆19 · Updated last week
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" @ ICCV 2023 ☆34 · Updated last year
- Official code for ICLR 2024 paper, "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆78 · Updated last year
- This is the official PyTorch Implementation of "SoTTA: Robust Test-Time Adaptation on Noisy Data Streams (NeurIPS '23)" by Taesik Gong*, … ☆21 · Updated last year
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆86 · Updated 6 months ago
- ☆62 · Updated 7 months ago
- [ICLR24] AutoVP: An Automated Visual Prompting Framework and Benchmark ☆19 · Updated last year
- The PyTorch implementation for "DEAL: Disentangle and Localize Concept-level Explanations for VLMs" (ECCV 2024 Strong Double Blind) ☆19 · Updated 5 months ago
- Official code for paper "Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models, ICML2024" ☆24 · Updated 3 months ago
- Official Implementation of paper "Distilling Long-tailed Datasets" ☆13 · Updated 2 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆47 · Updated 4 months ago
- Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment, arXiv 2024 / CVPR 2025 ☆27 · Updated 2 months ago
- ☆13 · Updated 2 years ago