clip-vil / CLIP-ViL
[ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383
☆401 · Updated 2 years ago
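CLIP-ViL's central claim is that CLIP's image encoder, pre-trained on web-scale image-text pairs, can serve as the visual backbone for vision-and-language tasks in place of region-based detectors. A minimal sketch of the primitive it builds on — scoring captions against an image with OpenAI's official `clip` package — where the `RN50` checkpoint, `demo.jpg`, and the candidate captions are illustrative assumptions, not fixed by the repo:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a CLIP checkpoint; the paper studies ResNet variants such as RN50.
model, preprocess = clip.load("RN50", device=device)

image = preprocess(Image.open("demo.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["a dog playing fetch", "a plate of food"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)  # one global image embedding
    text_features = model.encode_text(texts)    # one embedding per caption

    # Cosine similarity of the image against each caption.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    similarity = (image_features @ text_features.T).squeeze(0)

print(similarity.tolist())
```

CLIP-ViL itself goes further, feeding the pre-pooling grid features into task-specific V&L models (VQA, captioning, navigation); this sketch only shows the shared image-text embedding space those features come from.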
Related projects
Alternatives and complementary repositories for CLIP-ViL
- Project page for VinVL ☆350 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆637 · Updated 2 years ago
- A PyTorch reimplementation of bottom-up-attention models ☆294 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆289 · Updated last year
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆705 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆450 · Updated last year
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆362 · Updated last year
- A Faster R-CNN model in PyTorch, pretrained on Visual Genome with a ResNet-101 backbone ☆231 · Updated 2 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆389 · Updated last year
- METER: A Multimodal End-to-end TransformER Framework ☆362 · Updated 2 years ago
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆185 · Updated last year
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆202 · Updated last year
- [CVPR'21 Oral] Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning ☆206 · Updated 2 years ago
- Grid features pre-training code for visual question answering ☆268 · Updated 3 years ago
- Image scene graph generation benchmark ☆390 · Updated 2 years ago
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆260 · Updated last month
- Reliably download millions of images efficiently ☆113 · Updated 3 years ago
- This repository focuses on image captioning, video captioning, sequence-to-sequence learning, and NLP ☆415 · Updated 2 years ago
- An implementation that adapts pre-trained V+L models to VQA tasks. Currently supports VisualBERT, LXMERT, and UNITER ☆163 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆269 · Updated 2 years ago
- ☆979 · Updated 2 years ago
- ☆190 · Updated 6 months ago
- PyTorch bottom-up attention with Detectron2 ☆231 · Updated 2 years ago
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆280 · Updated last year
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆185 · Updated 2 years ago
- Flickr30K Entities Dataset ☆166 · Updated 5 years ago
- An easy-to-use and efficient tool for extracting OpenAI CLIP (Global/Grid) features from images and text (see the feature-extraction sketch after this list) ☆111 · Updated 2 years ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆186 · Updated 9 months ago
- Referring Expression Datasets API ☆466 · Updated 2 months ago
- Language Models Can See: Plugging Visual Controls in Text Generation (see the decoding sketch after this list) ☆254 · Updated 2 years ago
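For the CLIP feature-extraction entry above: "global" features are the pooled embedding that `encode_image` returns, while "grid" features are the spatial map from the last backbone stage before CLIP's attention pooling. A minimal sketch of capturing both, using the openai `clip` package directly rather than that repository's own scripts; the hook on `model.visual.layer4` assumes the layer naming in OpenAI's released ResNet implementation, and `demo.jpg` is a placeholder input:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# Grab grid features with a forward hook on the last ResNet stage --
# the spatial map that CLIP's attention pooling would otherwise collapse.
grid = {}
hook = model.visual.layer4.register_forward_hook(
    lambda module, inputs, output: grid.update(features=output))

image = preprocess(Image.open("demo.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    global_features = model.encode_image(image)  # pooled embedding
hook.remove()

print(global_features.shape)   # torch.Size([1, 1024]) for RN50
print(grid["features"].shape)  # torch.Size([1, 2048, 7, 7]) at 224x224 input
```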
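And for the last entry (MAGIC): visual controls are "plugged in" at decoding time by mixing a language model's next-token probabilities with CLIP's image-text similarity, with no extra training. A toy greedy version of that general recipe, assuming HuggingFace `transformers` plus openai `clip`; the mixing weight `alpha`, candidate count `k`, and scoring details are illustrative, not the repository's exact algorithm:

```python
import torch
import clip
from PIL import Image
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
lm = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")
clip_model, preprocess = clip.load("ViT-B/32", device=device)

# Embed the image once; candidate captions are scored against it each step.
image = preprocess(Image.open("demo.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    img_emb = clip_model.encode_image(image)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)

text = "A picture of"
alpha, k = 0.5, 10  # illustrative mixing weight and candidate count

for _ in range(8):  # extend the caption by 8 tokens
    ids = tok(text, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        probs = lm(input_ids=ids).logits[0, -1].softmax(dim=-1)
    top_p, top_i = probs.topk(k)

    # Rerank the LM's top-k next tokens by how well the extended
    # caption matches the image under CLIP.
    cands = [text + tok.decode(int(i)) for i in top_i]
    with torch.no_grad():
        cand_emb = clip_model.encode_text(
            clip.tokenize(cands, truncate=True).to(device))
        cand_emb = cand_emb / cand_emb.norm(dim=-1, keepdim=True)
        clip_score = (cand_emb @ img_emb.T).squeeze(-1)

    best = (alpha * clip_score + (1 - alpha) * top_p.log()).argmax()
    text = cands[int(best)]

print(text)
```

The same decode-time reranking idea underlies the ZeroCap and CapDec entries as well, though each repository implements its own search and scoring.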