pliang279 / MultiViz
[ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models
☆97 · Updated 8 months ago
Alternatives and similar repositories for MultiViz
Users interested in MultiViz are comparing it to the repositories listed below.
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆74 · Updated 6 months ago
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆66 · Updated last year
- [TMLR 2022] High-Modality Multimodal Transformer ☆115 · Updated 6 months ago
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks ☆28 · Updated 2 years ago
- Official implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 4 months ago
- Official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision… ☆28 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models", Spotlight @ ICLR 2023 ☆77 · Updated 11 months ago
- [ICLR 2023] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ☆98 · Updated last year
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆155 · Updated 2 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆48 · Updated last year
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs… ☆33 · Updated last year
- ☆118 · Updated 2 years ago
- [CVPR 2023] Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification ☆91 · Updated 11 months ago
- The Continual Learning in Multimodality Benchmark ☆67 · Updated last year
- Code for the paper "Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution" ☆50 · Updated last year
- A comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆74 · Updated last year
- A curated list of vision-and-language pre-training (VLP). :-) ☆58 · Updated 2 years ago
- Code example for Learning Multimodal Data Augmentation in Feature Space ☆43 · Updated 2 years ago
- ☆76 · Updated last month
- Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of mo… ☆33 · Updated last month
- ☆59 · Updated last year
- Holistic evaluation of multimodal foundation models ☆47 · Updated 9 months ago
- [T-PAMI] A curated list of self-supervised multimodal learning resources ☆252 · Updated 9 months ago
- Code for the paper "A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others" ☆48 · Updated 10 months ago
- Official implementation of "Calibrating Multimodal Learning" (ICML 2023) ☆21 · Updated last year
- I2M2: Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning (NeurIPS 2024) ☆19 · Updated 6 months ago
- Official implementation of the Concept Discovery Models paper ☆13 · Updated last year
- Repository for the NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and the NeurIPS 2023 paper… ☆61 · Updated last month
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆83 · Updated last year
- ☆155 · Updated 3 years ago