Heidelberg-NLP / MM-SHAP
This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks".
☆31 · Updated last year
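For orientation before the list: MM-SHAP reports, per modality, the share of the model's total absolute Shapley attribution mass (T-SHAP for text, V-SHAP for vision). The minimal sketch below illustrates that aggregation step under the assumption that per-token Shapley values have already been computed; the `mm_shap` helper and the example numbers are illustrative, not code from this repository.

```python
# Illustrative sketch (not the official implementation): given absolute
# Shapley values for text tokens and image patches, T-SHAP is the share of
# total attribution mass on text, V-SHAP the share on the image.
import numpy as np

def mm_shap(text_shap: np.ndarray, image_shap: np.ndarray) -> tuple[float, float]:
    """Return (T-SHAP, V-SHAP): each modality's share of absolute Shapley mass."""
    text_mass = np.abs(text_shap).sum()
    image_mass = np.abs(image_shap).sum()
    total = text_mass + image_mass
    return text_mass / total, image_mass / total

# Hypothetical per-token attributions for one image-sentence pair
t_shap, v_shap = mm_shap(
    np.array([0.30, -0.10, 0.05]),        # text tokens
    np.array([0.20, 0.15, -0.05, 0.10]),  # image patches
)
print(f"T-SHAP={t_shap:.2f}, V-SHAP={v_shap:.2f}")  # the two shares sum to 1
```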
Alternatives and similar repositories for MM-SHAP
Users interested in MM-SHAP are comparing it to the repositories listed below.
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs…) ☆38 · Updated 2 years ago
- Official implementation for the NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆36 · Updated last year
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆98 · Updated last year
- Official code implementation for the paper "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Expl…" ☆12 · Updated 8 months ago
- Code, data, and models for the Sherlock corpus ☆58 · Updated 3 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆49 · Updated last year
- ☆30 · Updated 2 years ago
- Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Work… ☆56 · Updated 8 months ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated 2 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆87 · Updated last year
- Corpus to accompany: "Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest" ☆58 · Updated 8 months ago
- [ICML 2022] This is the PyTorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https:… ☆19 · Updated 3 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 3 years ago
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts…" ☆61 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- ☆24 · Updated 8 months ago
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆82 · Updated 5 months ago
- Code for the EMNLP 2024 paper "How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for M…" ☆13 · Updated last year
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality", EMNLP 2022 ☆31 · Updated 2 years ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆167 · Updated 3 years ago
- Code and data for ImageCoDe, a contextual vision-and-language benchmark ☆41 · Updated last year
- ☆120 · Updated 2 years ago
- ☆23 · Updated last year
- The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23 ☆35 · Updated 2 years ago
- [ICLR '25] Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" ☆96 · Updated 2 weeks ago
- ☆11 · Updated 2 years ago
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆286 · Updated 2 years ago
- ☆130 · Updated 3 years ago
- "Understanding Dataset Difficulty with V-Usable Information" (ICML 2022, outstanding paper) ☆89 · Updated 2 years ago
- Code to reproduce the experiments in the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models" ☆16 · Updated 2 years ago