pliang279 / MultiViz
[ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models
☆96 · Updated 8 months ago
Alternatives and similar repositories for MultiViz:
Users interested in MultiViz are comparing it to the repositories listed below:
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆72 · Updated 5 months ago
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆66 · Updated last year
- [TMLR 2022] High-Modality Multimodal Transformer ☆114 · Updated 5 months ago
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 3 months ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆153 · Updated 2 years ago
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆72 · Updated last year
- Code Example for Learning Multimodal Data Augmentation in Feature Space ☆42 · Updated 2 years ago
- Multimodal Masked Autoencoders (M3AE): A JAX/Flax Implementation ☆103 · Updated last month
- Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of modalities ☆32 · Updated last month
- The Continual Learning in Multimodality Benchmark ☆67 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆77 · Updated 11 months ago
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs…) ☆33 · Updated last year
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆48 · Updated last year
- Visual Language Transformer Interpreter - An interactive visualization tool for interpreting vision-language transformers ☆91 · Updated last year
- Code for the paper Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution ☆48 · Updated last year
- Holistic evaluation of multimodal foundation models ☆47 · Updated 8 months ago
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆33 · Updated last year
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks ☆28 · Updated 2 years ago
- ☆117 · Updated 2 years ago
- ☆58 · Updated last year
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆59 · Updated last year
- PyTorch implementation of SMIL: Multimodal Learning with Severely Missing Modality (AAAI 2021) ☆104 · Updated 2 years ago
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆42 · Updated last year
- Implementation of Zorro, Masked Multimodal Transformer, in PyTorch ☆97 · Updated last year
- [T-PAMI] A curated list of self-supervised multimodal learning resources. ☆252 · Updated 8 months ago
- CVPR 2022, Robust Contrastive Learning against Noisy Views ☆83 · Updated 3 years ago
- MultiModN – Multimodal, Multi-Task, Interpretable Modular Networks (NeurIPS 2023) ☆33 · Updated last year
- A Domain-Agnostic Benchmark for Self-Supervised Learning ☆107 · Updated last year
- [NeurIPS'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models ☆68 · Updated 4 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆82 · Updated 11 months ago