Recent Advances in Vision and Language Pre-training (VLP)
☆295 · Jun 6, 2023 · Updated 2 years ago
Alternatives and similar repositories for awesome-Vision-and-Language-Pre-training
Users interested in awesome-Vision-and-Language-Pre-training often compare it to the libraries listed below.
- A curated list of awesome vision and language resources (still under construction... stay tuned!) · ☆560 · Nov 4, 2024 · Updated last year
- A curated list of vision-and-language pre-training (VLP). :-) · ☆62 · Jul 6, 2022 · Updated 3 years ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) · ☆1,155 · Aug 19, 2022 · Updated 3 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) · ☆1,232 · Jun 28, 2024 · Updated last year
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) · ☆374 · Jul 29, 2023 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework · ☆376 · Nov 16, 2022 · Updated 3 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training · ☆141 · Dec 16, 2025 · Updated 2 months ago
- Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answer… · ☆55 · Oct 30, 2024 · Updated last year
- A curated list of prompt-based papers in computer vision and vision-language learning · ☆925 · Dec 18, 2023 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" · ☆807 · Mar 20, 2024 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… · ☆2,554 · Apr 24, 2024 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) · ☆766 · Apr 14, 2022 · Updated 3 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) · ☆209 · Dec 18, 2022 · Updated 3 years ago
- Un-*** 50-billion multimodal dataset · ☆23 · Sep 14, 2022 · Updated 3 years ago
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training · ☆281 · Mar 25, 2023 · Updated 2 years ago
- Grounded Language-Image Pre-training · ☆2,575 · Jan 24, 2024 · Updated 2 years ago
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting · ☆544 · Sep 15, 2023 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training · ☆418 · Jul 14, 2025 · Updated 7 months ago
- awesome grounding: A curated list of research papers in visual grounding · ☆1,125 · Sep 21, 2025 · Updated 5 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm · ☆675 · Sep 19, 2022 · Updated 3 years ago
- Code release for SLIP: Self-supervision Meets Language-Image Pre-training · ☆787 · Feb 9, 2023 · Updated 3 years ago
- A curated list of Visual Question Answering (VQA, including Image/Video Question Answering), Visual Question Generation, Visual Dialog, Visual Common… · ☆673 · Jul 6, 2023 · Updated 2 years ago
- Learning to Mask and Permute Visual Tokens for Vision Transformer Pre-Training · ☆16 · Jul 1, 2025 · Updated 8 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) · ☆2,182 · May 20, 2024 · Updated last year
- Reading list for research topics in Masked Image Modeling · ☆338 · Dec 3, 2024 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence · ☆11,177 · Nov 18, 2024 · Updated last year
- Code for ALBEF: a new vision-language pre-training method · ☆1,756 · Sep 20, 2022 · Updated 3 years ago
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" · ☆259 · May 3, 2024 · Updated last year
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation · ☆5,681 · Aug 5, 2024 · Updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" · ☆1,527 · Apr 3, 2024 · Updated last year
- 🥉 Codalab-Microsoft-COCO-Image-Captioning-Challenge 3rd place solution (06.30.21) · ☆23 · Apr 6, 2022 · Updated 3 years ago
- COYO-700M: Large-scale Image-Text Pair Dataset · ☆1,252 · Nov 30, 2022 · Updated 3 years ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" · ☆44 · Apr 30, 2023 · Updated 2 years ago
- Reading list for research topics in multimodal machine learning · ☆6,824 · Aug 20, 2024 · Updated last year
- Directed masked autoencoders · ☆14 · Feb 20, 2026 · Updated 2 weeks ago
- A subset of YFCC100M: tools, checking scripts, and web-drive links for downloading the datasets (uncompressed) · ☆19 · Nov 13, 2024 · Updated last year
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) · ☆246 · Jun 10, 2025 · Updated 8 months ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale · ☆1,699 · Feb 23, 2026 · Updated last week