j-min / CLIP-Caption-Reward
PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
☆246 · Updated 6 months ago
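The repository optimizes a captioning model against the CLIP image-text similarity of its generated captions, used as a reward signal. Below is a minimal sketch of that reward computation with Hugging Face's CLIP; the checkpoint choice and the `compute_clip_reward` helper are illustrative assumptions, not the repository's actual code.

```python
# Sketch of a CLIP-similarity reward for generated captions.
# Assumptions (not taken from the CLIP-Caption-Reward repo): the HF
# "openai/clip-vit-base-patch32" checkpoint and this helper's name/signature.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def compute_clip_reward(image: Image.Image, captions: list[str]) -> torch.Tensor:
    """Return one image-text cosine-similarity reward per candidate caption."""
    inputs = processor(text=captions, images=image,
                       return_tensors="pt", padding=True, truncation=True)
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    # L2-normalize both embeddings so the dot product is a cosine similarity.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (txt_emb @ img_emb.T).squeeze(-1)  # higher = caption matches image better

# Rewards for sampled captions like these can plug into a self-critical
# (REINFORCE-style) update of the captioning model.
image = Image.open("example.jpg")
print(compute_clip_reward(image, ["a dog on a beach", "a cat indoors"]))
```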
Alternatives and similar repositories for CLIP-Caption-Reward
Users interested in CLIP-Caption-Reward are comparing it to the repositories listed below
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆279 · Updated 3 years ago
- Language Models Can See: Plugging Visual Controls in Text Generation ☆259 · Updated 3 years ago
- CapDec: Zero-Shot Image Captioning Using CLIP and GPT-2 (Findings of EMNLP 2022) ☆202 · Updated last year
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆98 · Updated 2 years ago
- Code and data for the ECCV 2020 paper "Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards" ☆86 · Updated 2 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆374 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆411 · Updated 5 months ago
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆197 · Updated 2 years ago
- Generate text captions for images from their embeddings. ☆117 · Updated 2 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023) ☆143 · Updated 6 months ago
- Search photos on Unsplash with OpenAI's CLIP model, supporting joint image+text queries and attention visualization. ☆223 · Updated 4 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 7 months ago
- L-Verse: Bidirectional Generation Between Image and Text ☆107 · Updated 8 months ago
- Implementation of DeepMind's Flamingo vision-language model, based on Hugging Face language models and ready for training ☆168 · Updated 2 years ago
- Image Captioning Using Transformer ☆271 · Updated 3 years ago
- Code for CLIPScore (EMNLP 2021); see the sketch after this list ☆243 · Updated 3 years ago
- ☆162 · Updated 3 years ago
- A length-controllable and non-autoregressive image captioning model. ☆68 · Updated 4 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆419 · Updated 3 years ago
- Reliably download millions of images efficiently ☆118 · Updated 4 years ago
- VisualGPT (CVPR 2022): GPT as a decoder for vision-language models ☆339 · Updated 2 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ☆225 · Updated 3 years ago
- Get hundreds of millions of image+URL pairs from the Crawling@Home dataset and preprocess them ☆223 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆577 · Updated 2 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- Project page for VinVL ☆359 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆139 · Updated 2 weeks ago
- Code for the paper "LAFITE: Towards Language-Free Training for Text-to-Image Generation" (CVPR 2022) ☆183 · Updated 2 years ago
- Flickr30K Entities Dataset ☆181 · Updated 7 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆176 · Updated 3 years ago
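The CLIPScore repository listed above implements the reference-free metric behind this kind of reward, which rescales the image-text cosine similarity as CLIPScore(c, v) = 2.5 · max(cos(E_c, E_v), 0). A minimal sketch of that formula, assuming precomputed CLIP embeddings; the `clipscore` function name and batch layout are illustrative, not the repository's API.

```python
# Sketch of the reference-free CLIPScore metric (Hessel et al., EMNLP 2021):
# CLIPScore(c, v) = 2.5 * max(cos(text_emb, image_emb), 0).
import torch
import torch.nn.functional as F

def clipscore(text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
    """text_emb: (N, D) caption embeddings; image_emb: (N, D) image embeddings."""
    cos = F.cosine_similarity(text_emb, image_emb, dim=-1)
    return 2.5 * torch.clamp(cos, min=0.0)  # negative similarities clip to 0

# Example with random embeddings standing in for CLIP features:
t, v = torch.randn(4, 512), torch.randn(4, 512)
print(clipscore(t, v))  # one score per caption-image pair
```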