junchen14 / Multi-Modal-Transformer
This repository collects a wide range of multi-modal transformer architectures, including image transformers, video transformers, image-language transformers, video-language transformers, and self-supervised learning models. It also gathers useful tutorials and tools from these related domains.
☆219 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for Multi-Modal-Transformer
- Multi-Modal Transformer for Video Retrieval ☆258 · Updated last month
- Video Contrastive Learning with Global Context, ICCVW 2021 ☆158 · Updated 2 years ago
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" (CVPR 2022) ☆95 · Updated 2 years ago
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆338 · Updated 3 months ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆389 · Updated last year
- Recent Advances in Vision and Language Pre-training (VLP) ☆288 · Updated last year
- Code release for MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition, CVPR 2022 ☆145 · Updated last year
- PyTorch GPU distributed training code for MIL-NCE on HowTo100M ☆214 · Updated 2 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆185 · Updated 2 years ago
- PyTorch implementation of BEVT (CVPR 2022), https://arxiv.org/abs/2112.01529 ☆158 · Updated 2 years ago
- PyTorch implementation of a collection of scalable Video Transformer Benchmarks ☆283 · Updated 2 years ago
- ☆187 · Updated 2 years ago
- Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification ☆130 · Updated 3 years ago
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆171 · Updated last year
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆260 · Updated last month
- METER: A Multimodal End-to-end TransformER Framework ☆362 · Updated 2 years ago
- [NeurIPS'20] Self-supervised Co-Training for Video Representation Learning. Tengda Han, Weidi Xie, Andrew Zisserman. ☆287 · Updated 3 years ago
- Implementation of ViViT: A Video Vision Transformer ☆515 · Updated 3 years ago
- [SIGIR 2022] CenterCLIP: Token Clustering for Efficient Text-Video Retrieval. Also, a text-video retrieval toolbox based on CLIP + fast p… ☆126 · Updated 2 years ago
- Code for the HowTo100M paper ☆252 · Updated 4 years ago
- ☆65 · Updated 3 years ago
- Code release for ICCV 2021 paper "Anticipative Video Transformer" ☆152 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆636 · Updated 2 years ago
- End-to-End Dense Video Captioning with Parallel Decoding (ICCV 2021) ☆208 · Updated 10 months ago
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆136 · Updated 7 months ago
- Summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe… ☆116 · Updated last year
- A curated list of awesome self-supervised learning methods in videos ☆114 · Updated this week
- ☆169 · Updated 2 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆705 · Updated last year
- A Survey on multimodal learning research. ☆315 · Updated last year