Heidelberg-NLP / MM-SHAP
This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks"
☆32 · Updated last year
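MM-SHAP scores a model by how the absolute Shapley values of its input tokens split between the two modalities: the textual share is called T-SHAP and the visual share V-SHAP. Below is a minimal sketch of that final ratio, assuming per-token Shapley values have already been computed with a SHAP explainer; the function and variable names are illustrative, not the repository's API.

```python
import numpy as np

def mm_shap(shapley_values, is_text_token):
    """Illustrative sketch: split per-token Shapley magnitudes into a
    textual share (T-SHAP) and a visual share (V-SHAP).

    shapley_values : one Shapley value per input token (text + image).
    is_text_token  : boolean mask, True where a token is textual.
    """
    phi = np.abs(np.asarray(shapley_values, dtype=float))
    mask = np.asarray(is_text_token, dtype=bool)
    total = phi.sum()
    if total == 0:
        return 0.5, 0.5  # no attribution mass: treat modalities as balanced
    t_shap = phi[mask].sum() / total   # textual contribution
    v_shap = phi[~mask].sum() / total  # visual contribution
    return t_shap, v_shap

# Toy example: three text tokens followed by two image patches.
t, v = mm_shap([0.4, -0.1, 0.2, 0.8, -0.5],
               [True, True, True, False, False])
print(f"T-SHAP = {t:.2f}, V-SHAP = {v:.2f}")  # T-SHAP = 0.35, V-SHAP = 0.65
```

Because the shares are ratios of attribution magnitudes rather than accuracy deltas, they remain meaningful even when the model's prediction is wrong, which is what "performance-agnostic" refers to.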
Alternatives and similar repositories for MM-SHAP
Users interested in MM-SHAP are comparing it to the repositories listed below.
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆99 · Updated last year
- Official implementation for NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆36 · Updated last year
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs…) ☆38 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆49 · Updated 2 years ago
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆84 · Updated 7 months ago
- Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Workshop… ☆58 · Updated 9 months ago
- [ICLR '25] Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" ☆95 · Updated 2 months ago
- Official code implementation for the paper "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?" ☆12 · Updated 10 months ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆169 · Updated 3 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆89 · Updated last year
- ☆59 · Updated 2 years ago
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆34 · Updated last year
- [CVPR 2025] MicroVQA eval and 🤖RefineBot code for "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research"… ☆32 · Updated 2 months ago
- [ICLR '23 spotlight] An automatic and efficient tool to describe functionalities of individual neurons in DNNs ☆59 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- ☆124 · Updated 2 years ago
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated 2 years ago
- Official code and dataset for our NAACL 2024 paper: DialogCC: An Automated Pipeline for Creating High-Quality Multi-modal Dialogue Dataset ☆13 · Updated last year
- Code and data for ACL 2024 paper on 'Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space' ☆18 · Updated last year
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts" ☆61 · Updated 2 years ago
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated last year
- Holistic evaluation of multimodal foundation models ☆49 · Updated last year
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆34 · Updated 2 years ago
- Official PyTorch implementation for the paper "What if...?: Thinking Counterfactual Keywords Helps to Mitigate Hallucination in Large Multi-modal Models" ☆19 · Updated last year
- ☆17 · Updated last year
- Code and data for ImageCoDe, a contextual vision-and-language benchmark ☆41 · Updated last year
- Corpus to accompany: "Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest" ☆59 · Updated 10 months ago
- Code for "Enhancing In-context Learning via Linear Probe Calibration" ☆37 · Updated last year