pliang279 / MultiViz
[ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models
☆96 · Updated 10 months ago
Alternatives and similar repositories for MultiViz
Users interested in MultiViz are comparing it to the libraries listed below.
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆74 · Updated 7 months ago
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆69 · Updated last year
- [TMLR 2022] High-Modality Multimodal Transformer ☆115 · Updated 7 months ago
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 5 months ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning (a sketch of the gap measurement follows this list) ☆158 · Updated 2 years ago
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks ☆28 · Updated 3 years ago
- Multimodal Masked Autoencoders (M3AE): A JAX/Flax Implementation ☆103 · Updated 4 months ago
- This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision… ☆29 · Updated last year
- [ICLR 2023] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ☆103 · Updated last year
- Visual Language Transformer Interpreter - An interactive visualization tool for interpreting vision-language transformers ☆93 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆78 · Updated last year
- Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of mo… ☆34 · Updated 3 months ago
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆44 · Updated last year
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆76 · Updated last week
- [T-PAMI] A curated list of self-supervised multimodal learning resources. ☆261 · Updated 10 months ago
- Code Example for Learning Multimodal Data Augmentation in Feature Space ☆43 · Updated 2 years ago
- A curated list of vision-and-language pre-training (VLP). :-) ☆59 · Updated 2 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆48 · Updated last year
- CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification ☆91 · Updated last year
- Code for the paper Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution ☆52 · Updated last year
- ☆120 · Updated 2 years ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆63 · Updated last month
- Holistic evaluation of multimodal foundation models ☆47 · Updated 10 months ago
- MultiModN – Multimodal, Multi-Task, Interpretable Modular Networks (NeurIPS 2023) ☆33 · Updated last year
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆57 · Updated 6 months ago
- CVPR 2022, Robust Contrastive Learning against Noisy Views ☆83 · Updated 3 years ago
- ☆27 · Updated 3 years ago
- ☆59 · Updated last year
- Official implementation for NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆34 · Updated 9 months ago
- The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23 ☆29 · Updated last year
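
Several of the repositories above center on multimodal contrastive learning; the "Mind the Gap" entry in particular studies the modality gap, the observation that image and text embeddings from CLIP-style models cluster in separate regions of the shared space, measured as the distance between the per-modality embedding centroids. Below is a minimal NumPy sketch of that measurement; `modality_gap` and its `(n, d)` array inputs are illustrative assumptions for this example, not the repository's actual API.

```python
import numpy as np

def modality_gap(image_embeds: np.ndarray, text_embeds: np.ndarray) -> float:
    """Distance between the centroids of L2-normalized image and text
    embeddings, the quantity the modality-gap analysis is built around."""
    img = image_embeds / np.linalg.norm(image_embeds, axis=1, keepdims=True)
    txt = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    # The gap vector is the difference of the two modality centroids;
    # its norm summarizes how far apart the modalities sit on the sphere.
    return float(np.linalg.norm(img.mean(axis=0) - txt.mean(axis=0)))

# Example with random (n, d) embeddings standing in for real model outputs.
rng = np.random.default_rng(0)
print(modality_gap(rng.normal(size=(512, 256)), rng.normal(size=(512, 256))))
```

In practice the inputs would be paired image and text embeddings extracted from the same contrastive model; random vectors are used here only so the snippet runs standalone.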