Heidelberg-NLP / MM-SHAP
This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks"
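For quick orientation, the core idea of the metric: MM-SHAP quantifies how much each modality contributes to a prediction by splitting absolute per-token Shapley values into a textual share (T-SHAP) and a visual share (V-SHAP). The snippet below is only an illustrative sketch of that proportion computation, not the repository's actual API; the function name, argument layout, and the assumption that text tokens precede image patches are assumptions made for the example.

```python
import numpy as np

def modality_shares(shap_values, n_text_tokens):
    """Illustrative sketch (hypothetical helper, not this repo's API):
    split per-token Shapley values into a textual share (T-SHAP) and a
    visual share (V-SHAP). Assumes the first `n_text_tokens` entries
    belong to text tokens and the remainder to image patches."""
    phi = np.abs(np.asarray(shap_values, dtype=float))
    text_contrib = phi[:n_text_tokens].sum()    # total absolute text contribution
    image_contrib = phi[n_text_tokens:].sum()   # total absolute image contribution
    total = text_contrib + image_contrib
    return text_contrib / total, image_contrib / total

# Toy example: this sample's prediction leans mostly on the image
t_shap, v_shap = modality_shares([0.10, 0.05, 0.02, 0.40, 0.30, 0.25], n_text_tokens=3)
print(f"T-SHAP = {t_shap:.2f}, V-SHAP = {v_shap:.2f}")  # ~0.15 vs. ~0.85
```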
Related projects
Alternatives and complementary repositories for MM-SHAP
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning
- Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Work…
- Fine-tuning CLIP using the ROCO dataset, which contains image-caption pairs from PubMed articles.
- A PyTorch implementation of "Multimodal Few-Shot Learning with Frozen Language Models" using OPT.
- ViLLA: Fine-grained vision-language representation learning from real-world data
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b…
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality
- Code for the paper "RECAP: Towards Precise Radiology Report Generation via Dynamic Disease Progression Reasoning" (EMNLP'23 Findings).
- On the Effectiveness of Parameter-Efficient Fine-Tuning
- Repository for the Multilingual-VQA task created during the HuggingFace JAX/Flax community week.
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages"
- A curated list of vision-and-language pre-training (VLP). :-)
- Code and dataset release for "PACS: A Dataset for Physical Audiovisual CommonSense Reasoning" (ECCV 2022)
- Learning to compose soft prompts for compositional zero-shot learning.
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390)
- Code for the paper "ORGAN: Observation-Guided Radiology Report Generation via Tree Reasoning" (ACL'23).
- EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images, NeurIPS 2023 D&B
- [ICML 2022] This is the PyTorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https:…
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions
- MedViLL official code (published in IEEE JBHI 2021)
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs…
- CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
- Code and data for ImageCoDe, a contextual vision-and-language benchmark
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral)
- Localized questions for VQA
- [ACL 2024] FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model