junchen14 / Multi-Modal-Transformer
This repository collects a wide range of multi-modal transformer architectures, including image transformers, video transformers, image-language transformers, video-language transformers, and self-supervised learning models. It also gathers useful tutorials and tools from these related domains.
☆232 · Updated 3 years ago
Alternatives and similar repositories for Multi-Modal-Transformer
Users interested in Multi-Modal-Transformer are comparing it to the libraries listed below.
- Code release for "MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition" (CVPR 2022) ☆152 · Updated 3 years ago
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" (CVPR 2022) ☆115 · Updated 3 years ago
- Video Contrastive Learning with Global Context (ICCVW 2021) ☆161 · Updated 3 years ago
- End-to-End Dense Video Captioning with Parallel Decoding (ICCV 2021) ☆226 · Updated last year
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 7 months ago
- PyTorch implementation of BEVT (CVPR 2022) https://arxiv.org/abs/2112.01529 ☆164 · Updated 3 years ago
- Official PyTorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021) ☆135 · Updated last year
- ☆180 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆403 · Updated 2 years ago
- ☆193 · Updated 3 years ago
- An unofficial implementation of TubeViT from "Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning" ☆93 · Updated last year
- Code release for the ICCV 2021 paper "Anticipative Video Transformer" ☆155 · Updated 3 years ago
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆190 · Updated 2 years ago
- [T-PAMI] A curated list of self-supervised multimodal learning resources ☆272 · Updated last year
- ☆58 · Updated 2 weeks ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆296 · Updated 2 years ago
- Code for the paper "Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation" ☆32 · Updated 2 years ago
- PyTorch implementation of a collection of scalable video transformer benchmarks ☆305 · Updated 3 years ago
- An official implementation of "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆363 · Updated last year
- Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning" (CVPR 2022) ☆267 · Updated last year
- Using VideoBERT to tackle video prediction ☆133 · Updated 4 years ago
- CrossCLR: Cross-modal Contrastive Learning for Multi-modal Video Representations (ICCV 2021) ☆64 · Updated 3 years ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆136 · Updated 2 years ago
- The official implementation of "Sports Video Analysis on Large-scale Data" (https://arxiv.org/abs/2208.04897) ☆79 · Updated 2 years ago
- A survey of multimodal learning research ☆334 · Updated 2 years ago
- [NeurIPS 2021 Spotlight] Official implementation of "Long Short-Term Transformer for Online Action Detection" ☆140 · Updated last year
- Official repository for "Self-Supervised Video Transformer" (CVPR'22) ☆107 · Updated last year
- A summary of video-to-text datasets; part of the review paper "Bridging Vision and Language from the Video-to-Text Pe…" ☆131 · Updated 2 years ago
- ☆70 · Updated 4 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆419 · Updated 3 years ago