zerovl / ZeroVL
[ECCV2022] Contrastive Vision-Language Pre-training with Limited Resources
☆46 · Updated 3 years ago
Alternatives and similar repositories for ZeroVL
Users interested in ZeroVL are comparing it to the repositories listed below.
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆89 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆141 · Updated 2 weeks ago
- Source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT ☆72 · Updated 3 years ago
- [ECCV2022] New benchmark for evaluating pre-trained models; new supervised contrastive learning framework. ☆110 · Updated 2 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆33 · Updated 3 years ago
- A Python toolkit for the OmniLabel benchmark providing code for evaluation and visualization ☆22 · Updated 11 months ago
- Rethinking Nearest Neighbors for Visual Classification ☆31 · Updated 4 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- ☆65 · Updated 2 years ago
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 2 years ago
- The PyTorch implementation for "Video-Text Pre-training with Learned Regions" ☆42 · Updated 3 years ago
- Use CLIP to represent video for Retrieval Task ☆70 · Updated 4 years ago
- This repository provides data for the VAW dataset as described in the CVPR 2021 paper "Learning to Predict Visual Attributes in the Wild" ☆69 · Updated 3 years ago
- A Unified Framework for Video-Language Understanding ☆61 · Updated 2 years ago
- ☆110 · Updated 3 years ago
- [CVPR'21 Oral] Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning ☆208 · Updated 3 years ago
- Official repository of ICCV 2021 - Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models ☆128 · Updated 2 months ago
- Replication of Pix2Seq with Pretrained Model ☆59 · Updated 4 years ago
- [CVPR-2023] The official dataset of Advancing Visual Grounding with Scene Knowledge: Benchmark and Method ☆33 · Updated 2 years ago
- ☆73 · Updated 3 years ago
- Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone ☆131 · Updated 2 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 · Updated 2 years ago
- Toolkit for the Elevater Benchmark ☆76 · Updated 2 years ago
- ☆92 · Updated 2 years ago
- [CVPR 2022] The code for our paper "Object-aware Video-language Pre-training for Retrieval" ☆62 · Updated 3 years ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated 2 years ago
- PyTorch code for MUST ☆108 · Updated 8 months ago
- ☆32 · Updated 3 years ago
- 📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) ☆53 · Updated 2 years ago
- MLPs for Vision and Language Modeling (Coming Soon) ☆27 · Updated 4 years ago