Heidelberg-NLP / MM-SHAP
This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks"
☆28 · Updated last year
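MM-SHAP measures how much each modality contributes to a prediction by splitting the absolute Shapley values of one sample between its text tokens and image patches: the textual degree T-SHAP is the share of absolute Shapley mass assigned to text tokens, and V-SHAP the share assigned to image patches. The snippet below is a minimal sketch of that ratio, assuming per-token Shapley values have already been computed (e.g., with a SHAP explainer over the model); the function and variable names are illustrative and not taken from this repository.

```python
import numpy as np

def mm_shap(text_shap: np.ndarray, image_shap: np.ndarray) -> dict:
    """Split one sample's absolute Shapley mass between modalities (illustrative sketch).

    text_shap  : Shapley values of the text tokens
    image_shap : Shapley values of the image patches
    """
    phi_t = np.abs(text_shap).sum()   # textual contribution
    phi_v = np.abs(image_shap).sum()  # visual contribution
    total = phi_t + phi_v
    return {"T-SHAP": phi_t / total, "V-SHAP": phi_v / total}

# Example: a sample where the model leans mostly on the text
scores = mm_shap(np.array([0.4, -0.3, 0.2]), np.array([0.05, -0.05]))
print(f"T-SHAP={scores['T-SHAP']:.2f}  V-SHAP={scores['V-SHAP']:.2f}")
# T-SHAP=0.90  V-SHAP=0.10
```

Because the result is a ratio of contributions rather than an accuracy, it stays meaningful even when the model's prediction is wrong, which is what makes the metric performance-agnostic.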
Alternatives and similar repositories for MM-SHAP
Users interested in MM-SHAP are comparing it to the repositories listed below.
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022, https://arxiv.org/abs…) ☆33 · Updated last year
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated last year
- Official implementation for the NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆33 · Updated 7 months ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆48 · Updated last year
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts…" ☆57 · Updated last year
- Official code implementation for the paper "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Expl…" ☆11 · Updated last month
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆79 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆54 · Updated 3 months ago
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆97 · Updated 8 months ago
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆33 · Updated last year
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆154 · Updated 2 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- Measuring the Mixing of Contextual Information in the Transformer ☆29 · Updated last year
- Official Pytorch implementation of "Improved Probabilistic Image-Text Representations" (ICLR 2024) ☆58 · Updated 11 months ago
- A PyTorch implementation of "Multimodal Few-Shot Learning with Frozen Language Models", using OPT ☆43 · Updated 2 years ago
- Official Pytorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25)☆67Updated 2 months ago
- [ICML 2022] This is the pytorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https:…☆19Updated 2 years ago
- Code and data for ImageCoDe, a contextual vison-and-language benchmark☆39Updated last year
- M-HalDetect Dataset Release☆25Updated last year
- Code for the EMNLP 2024 paper "How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for M…" ☆11 · Updated 5 months ago
- Visual Language Transformer Interpreter - An interactive visualization tool for interpreting vision-language transformers ☆92 · Updated last year
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- [ACL 2024] FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model ☆15 · Updated 2 weeks ago
- Code for the paper "Post-hoc Concept Bottleneck Models" (Spotlight @ ICLR 2023) ☆77 · Updated 11 months ago