YingWANGG / M2IB
Code for the paper Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution
☆53 · Updated last year
Alternatives and similar repositories for M2IB
Users interested in M2IB are also comparing it to the repositories listed below:
- The repo for "MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance", ICML 2024 ☆43 · Updated last year
- ☆44 · Updated 2 months ago
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆73 · Updated 2 months ago
- [ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection ☆78 · Updated last month
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆167 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆73 · Updated 5 months ago
- The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation", CVPR 2024 ☆53 · Updated 8 months ago
- CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification ☆91 · Updated last year
- PyTorch implementation of "Test-time Adaptation against Multi-modal Reliability Bias". ☆35 · Updated 6 months ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning ☆81 · Updated 3 months ago
- PMR: Prototypical Modal Rebalance for Multimodal Learning ☆39 · Updated 2 years ago
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆101 · Updated last year
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆76 · Updated last year
- Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment, arXiv 2024 / CVPR 2025 ☆30 · Updated 4 months ago
- Code for dmrnet ☆25 · Updated last month
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆106 · Updated last year
- Official implementation for the CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning" ☆111 · Updated last year
- [ICLR 2024 Spotlight] "Negative Label Guided OOD Detection with Pretrained Vision-Language Models" ☆20 · Updated 8 months ago
- ☆39 · Updated 8 months ago
- [ICLR 2023 Oral] The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation ☆45 · Updated 2 years ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F…" ☆267 · Updated last year
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models ☆82 · Updated 11 months ago
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts ☆45 · Updated this week
- PyTorch implementation of our CVPR 2024 paper "Unified Entropy Optimization for Open-Set Test-Time Adaptation" ☆27 · Updated 10 months ago
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆59 · Updated 2 years ago
- ☆96 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆50 · Updated last year
- Code and dataset for the AAAI 2024 paper "LAMM: Label Alignment for Multi-Modal Prompt Learning" ☆32 · Updated last year
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆101 · Updated last year
- Domain Generalization through Distilling CLIP with Language Guidance ☆30 · Updated last year