yuewang-cuhk / awesome-vision-language-pretraining-papers
Recent Advances in Vision and Language Pre-Trained Models (VL-PTMs)
☆1,152 · Updated 2 years ago
Alternatives and similar repositories for awesome-vision-language-pretraining-papers
Users interested in awesome-vision-language-pretraining-papers are comparing it to the repositories listed below.
- A curated list of Multimodal Related Research. ☆1,348 · Updated last year
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆792 · Updated 3 years ago
- A curated list of awesome vision and language resources (still under construction... stay tuned!) ☆535 · Updated 6 months ago
- awesome grounding: A curated list of research papers in visual grounding ☆1,073 · Updated 2 years ago
- A curated list of Visual Question Answering (VQA) (Image/Video Question Answering), Visual Question Generation, Visual Dialog, Visual Common… ☆662 · Updated last year
- Oscar and VinVL ☆1,049 · Updated last year
- PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers". ☆950 · Updated 2 years ago
- Multi Task Vision and Language ☆812 · Updated 3 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆719 · Updated last year
- Recent Transformer-based CV and related works. ☆1,331 · Updated last year
- Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations". ☆740 · Updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,461 · Updated last year
- Code for ALBEF: a new vision-language pre-training method ☆1,650 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆292 · Updated last year
- A curated list of deep learning resources for video-text retrieval. ☆618 · Updated last year
- This repository focuses on Image Captioning & Video Captioning & Seq-to-Seq Learning & NLP ☆413 · Updated 2 years ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" ☆536 · Updated 2 years ago
- A Survey on multimodal learning research. ☆326 · Updated last year
- ☆476 · Updated 2 years ago
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆920 · Updated last year
- A comprehensive list of awesome contrastive self-supervised learning papers. ☆1,269 · Updated 8 months ago
- Image scene graph generation benchmark ☆396 · Updated 2 years ago
- Vision-Language Pre-training for Image Captioning and Question Answering ☆418 · Updated 3 years ago
- A curated list of image captioning and related area resources. :-) ☆1,071 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆411 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training); a minimal sketch of the contrastive objective these works share appears after this list. ☆1,196 · Updated 10 months ago
- METER: A Multimodal End-to-end TransformER Framework ☆369 · Updated 2 years ago
- The Paper List of Large Multi-Modality Model (Perception, Generation, Unification), Parameter-Efficient Finetuning, Vision-Language Pretr… ☆423 · Updated 5 months ago
- A PyTorch reimplementation of bottom-up-attention models ☆301 · Updated 3 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,594 · Updated last week
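Many of the repositories above (CLIP, ALBEF, and the contrastive self-supervised learning list) revolve around the same symmetric image-text contrastive (InfoNCE) objective. The sketch below is a minimal, illustrative PyTorch version of that loss, not code from any listed repository; the embedding dimension, batch size, and temperature value are assumptions made for the example.

```python
# Minimal sketch of the symmetric image-text contrastive (InfoNCE) loss used
# by CLIP/ALBEF-style models. All shapes and the temperature are illustrative
# assumptions; this is not code taken from any repository listed above.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) embeddings of paired images/captions."""
    # L2-normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; the diagonal holds the true pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: match each image to its text and vice versa.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Random embeddings stand in for real encoder outputs.
img = torch.randn(8, 256)
txt = torch.randn(8, 256)
print(contrastive_loss(img, txt))
```

In practice the temperature is usually a learned parameter and the batch is gathered across devices, but the core objective is this bidirectional classification over in-batch pairs.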