ys-zong / awesome-self-supervised-multimodal-learning
[T-PAMI] A curated list of self-supervised multimodal learning resources.
☆254 · Updated 9 months ago
Alternatives and similar repositories for awesome-self-supervised-multimodal-learning
Users interested in awesome-self-supervised-multimodal-learning are comparing it to the repositories listed below
- A Survey on multimodal learning research. ☆328 · Updated last year
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆156 · Updated 2 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆401 · Updated 8 months ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆293 · Updated 2 years ago
- A curated list of awesome self-supervised learning methods in videos ☆140 · Updated last month
- Multimodal Masked Autoencoders (M3AE): A JAX/Flax Implementation ☆103 · Updated 3 months ago
- ☆157 · Updated 3 years ago
- [Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897) ☆331 · Updated last month
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆280 · Updated last year
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆165 · Updated last year
- Official Open Source code for "Masked Autoencoders As Spatiotemporal Learners" ☆340 · Updated 6 months ago
- ☆517 · Updated 6 months ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆425 · Updated 2 years ago
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" ☆237 · Updated last year
- PyTorch Reimplementation of LoRA (with support for nn.MultiheadAttention) ☆61 · Updated 6 months ago
- (ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" ☆110 · Updated last year
- The official implementation of CMAE (https://arxiv.org/abs/2207.13532, https://ieeexplore.ieee.org/document/10330745) ☆103 · Updated last year
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆466 · Updated 2 months ago
- A curated list of papers in Test-time Adaptation, Test-time Training and Source-free Domain Adaptation ☆503 · Updated 11 months ago
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆75 · Updated 7 months ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆162 · Updated 9 months ago
- Reading list for research topics in Masked Image Modeling ☆333 · Updated 6 months ago
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆279 · Updated last year
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆831 · Updated 10 months ago
- [TMLR 2022] High-Modality Multimodal Transformer ☆115 · Updated 7 months ago
- [Paper] [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆140 · Updated 11 months ago
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆262 · Updated 8 months ago
- The repository collects various multi-modal transformer architectures, including image transformer, video transformer, image-languag… ☆229 · Updated 2 years ago
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆187 · Updated last year
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ☆359 · Updated 2 years ago