junchen14 / Multi-Modal-Transformer
This repository collects a wide range of multi-modal transformer architectures, including image transformers, video transformers, image-language transformers, video-language transformers, and self-supervised learning models. It also gathers useful tutorials and tools from these related domains.
☆230 · Updated 2 years ago
Alternatives and similar repositories for Multi-Modal-Transformer
Users interested in Multi-Modal-Transformer are comparing it to the libraries listed below.
- Recent Advances in Vision and Language Pre-training (VLP) ☆292 · Updated 2 years ago
- A survey on multimodal learning research ☆328 · Updated last year
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 2 months ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆401 · Updated last year
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" (CVPR 2022) ☆108 · Updated 3 years ago
- Code release for "MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition" (CVPR 2022) ☆148 · Updated 2 years ago
- Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning" (CVPR 2022) ☆264 · Updated 9 months ago
- An official implementation of "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆357 · Updated 11 months ago
- ☆176 · Updated 2 years ago
- ☆193 · Updated 2 years ago
- [T-PAMI] A curated list of self-supervised multimodal learning resources ☆261 · Updated 11 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆205 · Updated 2 years ago
- Official PyTorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021) ☆132 · Updated last year
- Video Contrastive Learning with Global Context (ICCVW 2021) ☆158 · Updated 3 years ago
- PyTorch implementation of BEVT (CVPR 2022) https://arxiv.org/abs/2112.01529 ☆162 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆413 · Updated 2 years ago
- End-to-End Dense Video Captioning with Parallel Decoding (ICCV 2021) ☆219 · Updated last year
- An official implementation of "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆165 · Updated last year
- MixGen: A New Multi-Modal Data Augmentation ☆124 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆374 · Updated 2 years ago
- Research code for the CVPR 2022 paper "SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning" ☆240 · Updated 3 years ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆135 · Updated 2 years ago
- Using VideoBERT to tackle video prediction ☆129 · Updated 4 years ago
- PyTorch implementation of a collection of scalable video transformer benchmarks ☆299 · Updated 3 years ago
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆283 · Updated 2 years ago
- Code and pre-trained models for the paper "Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers" ☆229 · Updated 3 years ago
- Official implementation of "Align and Attend: Multimodal Summarization with Dual Contrastive Losses" (CVPR 2023) ☆76 · Updated 2 years ago
- Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone ☆129 · Updated last year
- Code release for the ICCV 2021 paper "Anticipative Video Transformer" ☆153 · Updated 3 years ago
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆181 · Updated last year