junchen14 / Multi-Modal-Transformer
This repository collects a variety of multi-modal transformer architectures, including image transformers, video transformers, image-language transformers, video-language transformers, and self-supervised learning models. It also gathers useful tutorials and tools from these related domains.
☆226 · Updated 2 years ago
Alternatives and similar repositories for Multi-Modal-Transformer:
Users interested in Multi-Modal-Transformer are comparing it to the repositories listed below
- Multi-Modal Transformer for Video Retrieval ☆259 · Updated 4 months ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆187 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆292 · Updated last year
- End-to-End Dense Video Captioning with Parallel Decoding (ICCV 2021) ☆213 · Updated last year
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆343 · Updated 6 months ago
- A survey on multimodal learning research ☆320 · Updated last year
- Project page for VinVL ☆351 · Updated last year
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆149 · Updated 10 months ago
- METER: A Multimodal End-to-end TransformER Framework ☆366 · Updated 2 years ago
- Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning", CVPR 2022 ☆260 · Updated 4 months ago
- Code + pre-trained models for the paper "Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers" ☆227 · Updated 2 years ago
- ☆192 · Updated 2 years ago
- Code release for "MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition", CVPR 2022 ☆147 · Updated 2 years ago
- PyTorch implementation of BEVT (CVPR 2022) https://arxiv.org/abs/2112.01529 ☆158 · Updated 2 years ago
- Video Contrastive Learning with Global Context, ICCVW 2021 ☆157 · Updated 2 years ago
- PyTorch GPU distributed training code for MIL-NCE HowTo100M ☆215 · Updated 2 years ago
- [T-PAMI] A curated list of self-supervised multimodal learning resources ☆243 · Updated 6 months ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆410 · Updated 2 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆391 · Updated last year
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆713 · Updated last year
- Implementation of ViViT: A Video Vision Transformer ☆521 · Updated 3 years ago
- Official implementation of "Everything at Once – Multi-modal Fusion Transformer for Video Retrieval", CVPR 2022 ☆98 · Updated 2 years ago
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆357 · Updated 2 years ago
- Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification ☆131 · Updated 3 years ago
- Official PyTorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021) ☆127 · Updated 11 months ago
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆177 · Updated last year
- PyTorch implementation of a collection of scalable Video Transformer benchmarks ☆289 · Updated 2 years ago
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆280 · Updated last year
- ☆67 · Updated 3 years ago
- Image scene graph generation benchmark ☆394 · Updated 2 years ago