dandelin / ViLT
Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"
☆1,471 · Updated last year
Alternatives and similar repositories for ViLT
Users interested in ViLT are comparing it to the repositories listed below.
- Code for ALBEF: a new vision-language pre-training method ☆1,667 · Updated 2 years ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ☆1,152 · Updated 2 years ago
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆793 · Updated 3 years ago
- Multi Task Vision and Language ☆813 · Updated 3 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆478 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆657 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆371 · Updated 2 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,979 · Updated last year
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,272 · Updated 3 years ago
- PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers". ☆954 · Updated 2 years ago
- ☆1,009 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,150 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆958 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,204 · Updated 11 months ago
- A curated list of Multimodal Related Research. ☆1,355 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,502 · Updated last year
- Oscar and VinVL ☆1,049 · Updated last year
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆722 · Updated last year
- awesome grounding: A curated list of research papers in visual grounding ☆1,078 · Updated 2 years ago
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆412 · Updated 2 years ago
- Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations". ☆741 · Updated 2 years ago
- Recent Transformer-based CV and related works. ☆1,333 · Updated last year
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆853 · Updated last year
- [ACL'19] [PyTorch] Multimodal Transformer ☆889 · Updated 2 years ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119