zengyan-97 / X-VLM
X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022)
☆491 · Updated Nov 25, 2022
Alternatives and similar repositories for X-VLM
Users interested in X-VLM are comparing it to the repositories listed below.
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) · ☆167 · Updated Aug 22, 2024
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning (CVPR 2022) · ☆268 · Updated Oct 2, 2024
- Code for ALBEF: a new vision-language pre-training method · ☆1,752 · Updated Sep 20, 2022
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) · ☆92 · Updated Jun 12, 2023
- Code and instructions for baselines in the VLUE benchmark · ☆41 · Updated Jul 16, 2022
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… · ☆2,555 · Updated Apr 24, 2024
- METER: A Multimodal End-to-end TransformER Framework · ☆375 · Updated Nov 16, 2022
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training · ☆281 · Updated Mar 25, 2023
- Grounded Language-Image Pre-training · ☆2,573 · Updated Jan 24, 2024
- Research code for the ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" · ☆800 · Updated Jun 30, 2021
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts · ☆188 · Updated May 1, 2025
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm · ☆674 · Updated Sep 19, 2022
- [CVPR'21 Oral] Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning · ☆208 · Updated Sep 30, 2022
- Cross Modal Retrieval with Querybank Normalisation · ☆57 · Updated Nov 21, 2023
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting · ☆543 · Updated Sep 15, 2023
- Oscar and VinVL · ☆1,052 · Updated Aug 28, 2023
- Code for the ICML 2021 (long talk) paper "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" · ☆1,524 · Updated Apr 3, 2024
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation · ☆5,672 · Updated Aug 5, 2024
- Recent Advances in Vision and Language Pre-Trained Models (VL-PTMs) · ☆1,155 · Updated Aug 19, 2022
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text · ☆952 · Updated Mar 19, 2025
- Project page for VinVL · ☆359 · Updated Jul 26, 2023
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… · ☆723 · Updated Aug 8, 2023
- Code release for SLIP: Self-supervision meets Language-Image Pre-training · ☆787 · Updated Feb 9, 2023
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" (https://arxiv.org/abs/2107.06383) · ☆421 · Updated Oct 28, 2022
- [ACL 2023] Code and data for the paper "Measuring Progress in Fine-grained Vision-and-Language Understanding" · ☆13 · Updated Jun 11, 2023
- LAVIS: A One-stop Library for Language-Vision Intelligence · ☆11,166 · Updated Nov 18, 2024
- Bridging Vision and Language Model · ☆286 · Updated Mar 27, 2023
- VaLM: Visually-augmented Language Modeling (ICLR 2023) · ☆56 · Updated Mar 6, 2023
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch · ☆1,200 · Updated Dec 12, 2023
- EVA Series: Visual Representation Fantasies from BAAI · ☆2,648 · Updated Aug 1, 2024
- [ECCV 2022] New benchmark for evaluating pre-trained models; new supervised contrastive learning framework · ☆110 · Updated Dec 8, 2023
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" · ☆807 · Updated Mar 20, 2024
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) · ☆209 · Updated Dec 18, 2022
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training · ☆141 · Updated Dec 16, 2025
- Official implementation of SEED-LLaMA (ICLR 2024) · ☆639 · Updated Sep 21, 2024
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … · ☆292 · Updated Jun 7, 2023
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training · ☆415 · Updated Jul 14, 2025
- ☆1,047 · Updated Oct 3, 2022
- Grid features pre-training code for visual question answering · ☆269 · Updated Sep 17, 2021