YoadTew / zero-shot-image-to-text
Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic
☆273 · Updated 2 years ago
Alternatives and similar repositories for zero-shot-image-to-text:
Users interested in zero-shot-image-to-text are comparing it to the libraries listed below
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆191 · Updated last year
- Language Models Can See: Plugging Visual Controls in Text Generation ☆256 · Updated 2 years ago
- CLIPScore EMNLP code ☆217 · Updated 2 years ago
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆189 · Updated last year
- Code for paper LAFITE: Towards Language-Free Training for Text-to-Image Generation (CVPR 2022) ☆181 · Updated 2 years ago
- L-Verse: Bidirectional Generation Between Image and Text ☆108 · Updated 2 years ago
- An easy-to-use and efficient codebase for extracting OpenAI CLIP (Global/Grid) features from images and text ☆121 · Updated 2 months ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)☆242Updated 2 years ago
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383☆411Updated 2 years ago
- ☆98Updated 4 months ago
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval"☆152Updated 11 months ago
- ☆117Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm☆649Updated 2 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023)☆140Updated last year
- ☆76Updated 2 years ago
- ICLR 2023 DeCap: Decoding CLIP Latents for Zero-shot Captioning☆130Updated 2 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021)☆368Updated last year
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral)☆123Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training ☆384 · Updated 2 years ago
- Project page for VinVL ☆353 · Updated last year
- PyTorch implementation of Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors ☆334 · Updated 2 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022)☆204Updated 2 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts☆185Updated 2 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space"☆393Updated last year
- Using pretrained encoder and language models to generate captions from multimedia inputs.☆94Updated 2 years ago
- [ACL 2023] Official PyTorch code for Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning"☆132Updated last year
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21]☆359Updated 2 years ago
- Flickr30K Entities Dataset☆169Updated 6 years ago
- Recent Advances in Vision and Language Pre-training (VLP)☆293Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering☆187Updated last year