Heidelberg-NLP / MM-SHAP
This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks"
☆26 · Updated last year
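MM-SHAP quantifies how much each modality contributes to a model's prediction by splitting token-level Shapley values between the text and image inputs. Below is a minimal sketch of that ratio, assuming per-token Shapley values for both modalities have already been computed (e.g. with a SHAP explainer); the function name and arguments are illustrative and not taken from this repository's API.

```python
import numpy as np

def mm_shap(text_shap_values, image_shap_values):
    """Sketch of the MM-SHAP ratio: the share of total absolute Shapley
    value attributed to each modality for a single prediction.

    Both arguments are 1-D sequences of per-token Shapley values
    (text tokens and image patches/tokens, respectively).
    """
    text_mass = np.abs(np.asarray(text_shap_values)).sum()
    image_mass = np.abs(np.asarray(image_shap_values)).sum()
    total = text_mass + image_mass
    t_shap = text_mass / total   # textual contribution (T-SHAP)
    v_shap = image_mass / total  # visual contribution (V-SHAP)
    return t_shap, v_shap

# Example: a prediction driven mostly by the image tokens.
t_shap, v_shap = mm_shap([0.1, -0.05, 0.2], [0.6, -0.4, 0.3, 0.1])
print(f"T-SHAP = {t_shap:.2f}, V-SHAP = {v_shap:.2f}")
```

Because the metric is a ratio of attribution mass rather than an accuracy score, it stays meaningful regardless of whether the model's prediction is correct (hence "performance-agnostic").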
Alternatives and similar repositories for MM-SHAP:
Users interested in MM-SHAP are comparing it to the libraries listed below:
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs… ☆33 · Updated last year
- Official implementation for NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆32 · Updated 6 months ago
- Repository for the paper: Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models ☆27 · Updated last year
- Official code implementation for the paper "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Expl… ☆10 · Updated 2 weeks ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆48 · Updated last year
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆41 · Updated last year
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆33 · Updated last year
- ☆57 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆48 · Updated last month
- This repository is related to 'Intriguing Properties of Hyperbolic Embeddings in Vision-Language Models', published at TMLR (2024), https… ☆18 · Updated 8 months ago
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25) ☆63 · Updated 3 weeks ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- ☆29 · Updated last year
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆95 · Updated 7 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆81 · Updated 10 months ago
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated last year
- ☆67 · Updated 8 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆77 · Updated last year
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts… ☆56 · Updated last year
- Visual question answering prompting recipes for large vision-language models ☆24 · Updated 6 months ago
- Visual Language Transformer Interpreter - An interactive visualization tool for interpreting vision-language transformers ☆90 · Updated last year
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆150 · Updated 2 years ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆16 · Updated 11 months ago
- ☆19 · Updated 5 months ago
- Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models (ACL Findings 2024) ☆15 · Updated 11 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆13 · Updated last month
- [ICML 2022] This is the PyTorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https:… ☆19 · Updated 2 years ago
- ☆117 · Updated 2 years ago