j-min / CLIP-Caption-Reward
PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
☆244 · Updated 2 months ago
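The core idea of the paper is to use CLIP's image-text similarity as the reward signal when fine-tuning a captioning model with self-critical (REINFORCE-style) training. Below is a minimal sketch of that reward computation, assuming the OpenAI `clip` package; the function name `clip_reward` and the training-loop comment are illustrative, not the repository's actual API.

```python
# Minimal sketch: CLIP image-text similarity as a captioning reward.
# Assumes the OpenAI CLIP package (pip install git+https://github.com/openai/CLIP.git).
# `clip_reward` is an illustrative name, not this repository's API.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def clip_reward(image: Image.Image, captions: list) -> torch.Tensor:
    """Cosine similarity between one image and each candidate caption."""
    img = model.encode_image(preprocess(image).unsqueeze(0).to(device))
    txt = model.encode_text(clip.tokenize(captions).to(device))
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img @ txt.T).squeeze(0)  # shape: (len(captions),)

# In self-critical training, the greedy caption's reward serves as the
# baseline, and sampled captions are pushed above it:
#   loss = -(clip_reward(img, sampled) - clip_reward(img, [greedy])) * logprob
```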
Alternatives and similar repositories for CLIP-Caption-Reward
Users interested in CLIP-Caption-Reward are comparing it to the repositories listed below
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Updated 2 years ago
- Language Models Can See: Plugging Visual Controls in Text Generation ☆258 · Updated 3 years ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT-2 (Findings of EMNLP 2022) ☆198 · Updated last year
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆397 · Updated last month
- ECCV 2020 paper: Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards. Code and data. ☆85 · Updated 2 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆373 · Updated 2 years ago
- L-Verse: Bidirectional Generation Between Image and Text ☆108 · Updated 4 months ago
- Implementation of DeepMind's Flamingo vision-language model, based on Hugging Face language models and ready for training ☆167 · Updated 2 years ago
- Generate text captions for images from their embeddings. ☆114 · Updated 2 years ago
- Search photos on Unsplash using OpenAI's CLIP model, with support for joint image+text queries and attention visualization. ☆222 · Updated 3 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023) ☆141 · Updated 2 months ago
- Code for the CLIPScore metric (EMNLP 2021); see the sketch after this list ☆237 · Updated 2 years ago
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆195 · Updated 2 years ago
- VisualGPT (CVPR 2022): GPT as a decoder for vision-language models ☆336 · Updated 2 years ago
- Project page for VinVL ☆357 · Updated 2 years ago
- Reliably download millions of images efficiently ☆117 · Updated 4 years ago
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆415 · Updated 2 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆172 · Updated 3 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 3 months ago
- Data repository for the VALSE benchmark. ☆37 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆572 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated last year
- MERLOT: Multimodal Neural Script Knowledge Models ☆224 · Updated 3 years ago
- Extended COCO Validation (ECCV) Caption dataset (ECCV 2022) ☆56 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- Image Captioning Using Transformer ☆269 · Updated 3 years ago
- Flickr30K Entities Dataset ☆177 · Updated 6 years ago
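The CLIPScore entry above refers to the reference-free captioning metric from Hessel et al. (EMNLP 2021), defined as CLIPScore(v, c) = 2.5 · max(cos(v, c), 0), where v and c are the CLIP embeddings of the image and the candidate caption. Below is a minimal sketch of that formula, again assuming the OpenAI `clip` package; `clipscore` is an illustrative name, not the linked repository's actual entry point.

```python
# Minimal sketch of reference-free CLIPScore (Hessel et al., EMNLP 2021):
#   CLIPScore(image, caption) = 2.5 * max(cos(v, c), 0)
# `clipscore` is an illustrative name, not the linked repository's API.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def clipscore(image: Image.Image, caption: str) -> float:
    v = model.encode_image(preprocess(image).unsqueeze(0).to(device))
    c = model.encode_text(clip.tokenize([caption]).to(device))
    cos = torch.cosine_similarity(v, c).item()  # cosine of the embedding pair
    return 2.5 * max(cos, 0.0)
```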