dandelin / ViLT
Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"
☆1,505 · Updated last year
Alternatives and similar repositories for ViLT
Users interested in ViLT are comparing it to the libraries listed below.
- Code for ALBEF: a new vision-language pre-training method ☆1,730 · Updated 3 years ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ☆1,155 · Updated 3 years ago
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆798 · Updated 4 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,539 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,180 · Updated last year
- Multi Task Vision and Language ☆820 · Updated 3 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆484 · Updated 2 years ago
- Oscar and VinVL ☆1,050 · Updated 2 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,102 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆1,001 · Updated last year
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆875 · Updated 2 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆722 · Updated 2 years ago
- PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers". ☆963 · Updated 3 years ago
- awesome grounding: A curated list of research papers in visual grounding ☆1,120 · Updated last month
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,306 · Updated 3 years ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆715 · Updated 3 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆373 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆669 · Updated 3 years ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,184 · Updated 2 years ago
- ☆1,038 · Updated 3 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,220 · Updated last year
- Simple image captioning model ☆1,396 · Updated last year
- Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations". ☆746 · Updated 2 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,666 · Updated this week
- This is an official implementation for "Video Swin Transformer". ☆1,594 · Updated 2 years ago
- The Paper List of Large Multi-Modality Model (Perception, Generation, Unification), Parameter-Efficient Finetuning, Vision-Language Pretr… ☆433 · Updated last month
- A curated list of deep learning resources for video-text retrieval. ☆635 · Updated 2 years ago
- Recent Transformer-based CV and related works. ☆1,334 · Updated 2 years ago
- ☆554 · Updated 3 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆417 · Updated 3 years ago