junchen14 / Multi-Modal-Transformer
The repository collects a wide range of multi-modal transformer architectures, including image transformers, video transformers, image-language transformers, video-language transformers, and self-supervised learning models. It also gathers useful tutorials and tools from these related domains.
☆226 · Updated 2 years ago
Alternatives and similar repositories for Multi-Modal-Transformer
Users interested in Multi-Modal-Transformer are comparing it to the libraries listed below.
- Multi-Modal Transformer for Video Retrieval☆260 · Updated 7 months ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts☆187 · Updated last month
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers☆180 · Updated last year
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation"☆355 · Updated 10 months ago
- PyTorch implementation of BEVT (CVPR 2022) https://arxiv.org/abs/2112.01529☆161 · Updated 2 years ago
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval", CVPR 2022☆105 · Updated 2 years ago
- PyTorch GPU distributed training code for MIL-NCE HowTo100M☆218 · Updated 2 years ago
- Code release for "MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition", CVPR 2022☆148 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP)☆293 · Updated last year
- Code release for ICCV 2021 paper "Anticipative Video Transformer"☆153 · Updated 3 years ago
- A survey on multimodal learning research☆328 · Updated last year
- PyTorch code for "TVLT: Textless Vision-Language Transformer" (NeurIPS 2022 Oral)☆123 · Updated 2 years ago
- S3D Text-Video model trained on HowTo100M using MIL-NCE☆195 · Updated 4 years ago
- A video database bridging human actions and human-object relationships☆143 · Updated 4 years ago
- End-to-End Dense Video Captioning with Parallel Decoding (ICCV 2021)☆220 · Updated last year
- "Object-Region Video Transformers", Herzig et al., CVPR 2022☆45 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework☆369 · Updated 2 years ago
- Video Contrastive Learning with Global Context, ICCVW 2021☆158 · Updated 3 years ago
- X-Pool: Cross-Modal Language-Video Attention for Text-to-Video Retrieval (https://layer6ai-labs.github.io/xpool/)☆124 · Updated last year
- Code + pre-trained models for the paper "Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers"☆229 · Updated 2 years ago
- [NeurIPS'20] Self-supervised Co-Training for Video Representation Learning. Tengda Han, Weidi Xie, Andrew Zisserman.☆288 · Updated 3 years ago
- [NeurIPS 2021 Spotlight] Official implementation of Long Short-Term Transformer for Online Action Detection☆135 · Updated 10 months ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning"☆134 · Updated 2 years ago
- Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning", CVPR 2022☆262 · Updated 8 months ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space"☆399 · Updated last year
- [ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos☆121 · Updated last year
- Implementation of ViViT: A Video Vision Transformer☆537 · Updated 3 years ago
- [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners"☆276 · Updated last year