YingWANGG / M2IB
Code for the paper "Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution"
☆64 · Updated last year
Alternatives and similar repositories for M2IB
Users interested in M2IB are comparing it to the repositories listed below.
- The official PyTorch implementation of the CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆95 · Updated 8 months ago
- The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation", CVPR 2024. ☆59 · Updated last year
- The repo for "MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance", ICML 2024. ☆52 · Updated last year
- PyTorch implementation of "Test-Time Adaptation against Multi-modal Reliability Bias". ☆44 · Updated last year
- ☆64 · Updated 3 months ago
- PMR: Prototypical Modal Rebalance for Multimodal Learning. ☆44 · Updated 2 years ago
- CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification. ☆105 · Updated last year
- ☆46 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024). ☆73 · Updated 11 months ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models. ☆174 · Updated 2 years ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F…". ☆281 · Updated 2 years ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning. ☆97 · Updated 6 months ago
- Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23. ☆226 · Updated 2 years ago
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models". ☆112 · Updated last year
- [ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection. ☆88 · Updated 2 months ago
- ☆54 · Updated last year
- The repo for "Diagnosing and Re-learning for Balanced Multi-modal Learning", ECCV 2024. ☆30 · Updated last year
- ☆105 · Updated 2 years ago
- Official code for the NeurIPS 2023 paper "Learning Unseen Modality Interaction". ☆17 · Updated last year
- [CVPR 2024] TEA: Test-time Energy Adaptation. ☆71 · Updated last year
- Code and dataset for the paper "LAMM: Label Alignment for Multi-Modal Prompt Learning", AAAI 2024. ☆33 · Updated 2 years ago
- Code for dmrnet. ☆29 · Updated 6 months ago
- ☆55 · Updated last year
- 🔎 Official code for the paper "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation". ☆47 · Updated 10 months ago
- [ICLR 2024 Spotlight] "Negative Label Guided OOD Detection with Pretrained Vision-Language Models". ☆21 · Updated last year
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts. ☆50 · Updated 6 months ago
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models. ☆97 · Updated 3 months ago
- [AAAI'25, CVPRW 2024] Official repository of the paper "Learning to Prompt with Text Only Supervision for Vision-Language Models". ☆119 · Updated last year
- Twin Contrastive Learning with Noisy Labels (CVPR 2023). ☆73 · Updated 2 years ago
- Code release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864. ☆69 · Updated 2 years ago