ys-zong / awesome-self-supervised-multimodal-learning
[T-PAMI] A curated list of self-supervised multimodal learning resources.
☆263 · Updated last year
Alternatives and similar repositories for awesome-self-supervised-multimodal-learning
Users interested in awesome-self-supervised-multimodal-learning are comparing it to the repositories listed below.
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆158 · Updated 2 years ago
- A Survey on multimodal learning research. ☆329 · Updated 2 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆404 · Updated 11 months ago
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ☆366 · Updated 2 years ago
- ☆161 · Updated 2 months ago
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆78 · Updated 9 months ago
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆97 · Updated last year
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆169 · Updated last year
- Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23 ☆214 · Updated last year
- Implementation of the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch" ☆229 · Updated 2 years ago
- Code for "Balanced Multimodal Learning via On-the-fly Gradient Modulation", CVPR 2022 (Oral) ☆279 · Updated 7 months ago
- A curated list of awesome self-supervised learning methods in videos ☆150 · Updated 2 weeks ago
- [Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897) ☆343 · Updated 4 months ago
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆70 · Updated last year
- A simple cross-attention module that updates both the source and target in one step ☆176 · Updated 3 weeks ago
- A collection of multi-modal transformer architectures, including image transformers, video transformers, image-languag… ☆230 · Updated 3 years ago
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" ☆242 · Updated last year
- Official implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 7 months ago
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆284 · Updated last year
- CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations, ICCV 2021 ☆64 · Updated 3 years ago
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT; [Tech report] Convpass ☆192 · Updated 2 years ago
- Reading list for research topics in Masked Image Modeling ☆336 · Updated 8 months ago
- Code for the CVPR'23 tutorial "All Things ViTs: Understanding and Interpreting Attention in Vision" ☆195 · Updated 2 years ago
- Exploring Visual Prompts for Adapting Large-Scale Models ☆282 · Updated 3 years ago
- Multimodal Masked Autoencoders (M3AE): A JAX/Flax Implementation ☆103 · Updated 6 months ago
- Official implementation of CMAE (https://arxiv.org/abs/2207.13532, https://ieeexplore.ieee.org/document/10330745) ☆108 · Updated last year
- ☆62 · Updated 2 years ago
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆289 · Updated last month
- Code for the paper "Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution" ☆56 · Updated last year
- A Python implementation of Certifiable Robust Multi-modal Training ☆18 · Updated 2 months ago