pliang279 / MultiViz
[ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models
☆96 · Updated 10 months ago
Alternatives and similar repositories for MultiViz
Users that are interested in MultiViz are comparing it to the libraries listed below
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆77 · Updated 8 months ago
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆69 · Updated last year
- [TMLR 2022] High-Modality Multimodal Transformer ☆116 · Updated 8 months ago
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 6 months ago
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆77 · Updated last month
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆157 · Updated 2 years ago
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆34 · Updated 2 years ago
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks ☆29 · Updated 3 years ago
- Official implementation for NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆34 · Updated 9 months ago
- Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of mo… ☆36 · Updated 3 months ago
- I2M2: Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning (NeurIPS 2024) ☆20 · Updated 8 months ago
- Visual Language Transformer Interpreter - An interactive visualization tool for interpreting vision-language transformers ☆94 · Updated last year
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆48 · Updated last year
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆44 · Updated last year
- ☆120 · Updated 2 years ago
- A curated list of vision-and-language pre-training (VLP). :-) ☆59 · Updated 3 years ago
- Holistic evaluation of multimodal foundation models ☆48 · Updated 11 months ago
- Multimodal Masked Autoencoders (M3AE): A JAX/Flax Implementation ☆103 · Updated 4 months ago
- Code Example for Learning Multimodal Data Augmentation in Feature Space ☆43 · Updated 2 years ago
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs…) ☆34 · Updated 2 years ago
- PyTorch implementation of LIMoE ☆53 · Updated last year
- ☆160 · Updated last month
- MultiModN – Multimodal, Multi-Task, Interpretable Modular Networks (NeurIPS 2023) ☆33 · Updated last year
- Official Code for ICML 2023 Paper: On the Generalization of Multi-modal Contrastive Learning ☆25 · Updated last year
- This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision… ☆30 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆79 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆57 · Updated 7 months ago
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆56 · Updated last year
- A Domain-Agnostic Benchmark for Self-Supervised Learning ☆107 · Updated 2 years ago
- Language Quantized AutoEncoders ☆107 · Updated 2 years ago