dandelin / ViLT
Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"
☆1,524 · Updated last year
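As a quick orientation before the alternatives below, here is a minimal visual question answering sketch using the Hugging Face `transformers` port of ViLT with the authors' `dandelin/vilt-b32-finetuned-vqa` checkpoint. This is an illustrative example, not the repository's own training code; the image URL is just a sample, and `transformers`, `torch`, `Pillow`, and `requests` are assumed installed.

```python
# Minimal ViLT visual question answering sketch (Hugging Face transformers port).
# Assumes: pip install transformers torch Pillow requests
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Any RGB image works; this COCO validation image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# The processor tokenizes the question and patch-embeds the image in one call;
# ViLT feeds both into a single transformer, with no CNN or region detector.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

encoding = processor(image, question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# The VQA head classifies over a fixed answer vocabulary.
predicted = outputs.logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[predicted])
```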
Alternatives and similar repositories for ViLT
Users interested in ViLT are comparing it to the libraries listed below.
- Code for ALBEF: a new vision-language pre-training method ☆1,751 · Updated 3 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆490 · Updated 3 years ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ☆1,155 · Updated 3 years ago
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆800 · Updated 4 years ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆1,023 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,199 · Updated 2 years ago
- Multi Task Vision and Language ☆825 · Updated 3 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,169 · Updated last year (CLIP-based; see the sketch after this list)
- Oscar and VinVL ☆1,052 · Updated 2 years ago
- PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers". ☆966 · Updated 3 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,555 · Updated last year
- PyTorch implementation of MoCo v3: https://arxiv.org/abs/2104.02057 ☆1,315 · Updated 4 years ago
- awesome grounding: A curated list of research papers in visual grounding
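Several entries above build directly on CLIP's image-text similarity: CLIP4Clip applies it to video retrieval, and the prompt-learning repository (CoOp/CoCoOp) replaces hand-written prompt text with learned embeddings. As a hedged illustration of that shared primitive, and not any listed repo's actual code, here is a minimal zero-shot matching sketch using the `transformers` CLIP port; the `openai/clip-vit-base-patch32` checkpoint, image URL, and prompt strings are just common examples.

```python
# Zero-shot image-text matching with CLIP, the backbone that
# CLIP4Clip and CoOp/CoCoOp build on. Assumes: pip install transformers torch Pillow requests
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of two cats", "a photo of a dog"]  # hand-written prompts

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-to-text similarity scores; softmax turns
# them into probabilities over the candidate prompts.
probs = outputs.logits_per_image.softmax(dim=-1)
for text, p in zip(texts, probs[0].tolist()):
    print(f"{text}: {p:.3f}")
```

Roughly speaking, CoOp's contribution is to make the prompt tokens in `texts` learnable vectors rather than fixed strings, while CLIP4Clip aggregates the same similarity scores across sampled video frames.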