j-min / CLIP-Caption-Reward
PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
☆242 · Updated 2 weeks ago
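The reward in this repository is built on CLIP's image-text similarity. As a rough, unofficial illustration of the idea (not the repository's actual training code), the sketch below scores a caption against an image with the Hugging Face transformers CLIP implementation; the checkpoint name and the w = 2.5 rescaling are assumptions borrowed from the CLIPScore paper (Hessel et al., EMNLP 2021).

```python
# Hedged sketch (an assumption, not the official CLIP-Caption-Reward code):
# score a candidate caption against an image via CLIP image-text similarity,
# rescaled as in CLIPScore: CLIP-S(c, v) = w * max(cos(c, v), 0), w = 2.5.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_reward(image: Image.Image, caption: str, w: float = 2.5) -> float:
    """Return a CLIPScore-style reward for one (image, caption) pair."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    # Cosine similarity between the two unit-length-normalizable embeddings.
    cos = torch.nn.functional.cosine_similarity(img_emb, txt_emb).item()
    return w * max(cos, 0.0)

# Example usage: clip_reward(Image.open("photo.jpg"), "a dog chasing a ball")
```

In a self-critical training loop this scalar would stand in for a CIDEr reward on sampled captions; here it is shown only as a standalone scorer.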
Alternatives and similar repositories for CLIP-Caption-Reward
Users interested in CLIP-Caption-Reward are comparing it to the repositories listed below.
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2 (Findings of EMNLP 2022) ☆197 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆276 · Updated 2 years ago
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated last month
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆193 · Updated 2 years ago
- Language Models Can See: Plugging Visual Controls in Text Generation ☆256 · Updated 3 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆371 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆394 · Updated 2 years ago
- Search photos on Unsplash based on OpenAI's CLIP model, supporting search with joint image+text queries and attention visualization. ☆222 · Updated 3 years ago
- Flickr30K Entities Dataset ☆176 · Updated 6 years ago
- Code for CLIPScore (EMNLP 2021) ☆226 · Updated 2 years ago
- Project page for VinVL ☆355 · Updated last year
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆361 · Updated 3 years ago
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆177 · Updated last year
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆412 · Updated 2 years ago
- Multi-modality pre-training ☆495 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆568 · Updated last year
- Get hundreds of millions of image+URL pairs from the Crawling@Home dataset and preprocess them ☆220 · Updated last year
- Implementation of DeepMind's Flamingo vision-language model, based on Hugging Face language models and ready for training ☆167 · Updated 2 years ago
- Easy-to-use, user-friendly, and efficient code for extracting OpenAI CLIP (Global/Grid) features from images and text ☆129 · Updated 5 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆205 · Updated 2 years ago
- ☆246 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated 2 years ago
- L-Verse: Bidirectional Generation Between Image and Text ☆108 · Updated 2 months ago
- ☆131 · Updated 2 years ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆134 · Updated 2 years ago
- Image Captioning Using Transformer ☆268 · Updated 3 years ago
- VisualGPT (CVPR 2022): GPT as a decoder for vision-language models ☆334 · Updated 2 years ago
- Reliably download millions of images efficiently (a generic download sketch follows this list) ☆116 · Updated 4 years ago
- Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards (ECCV 2020); code and data ☆85 · Updated 2 years ago
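Several entries above are tooling for assembling web-scale image-text data. As a hedged, generic sketch of the basic pattern such downloaders implement (concurrent fetching with failures skipped), and not any listed repository's actual code, consider:

```python
# Generic bulk-download sketch (illustrative only, not a listed repo's code):
# fetch a list of image URLs concurrently and skip failures silently.
# Assumes `requests` is installed; paths and worker counts are placeholders.
import os
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch(url: str, out_dir: str = "images", timeout: float = 10.0) -> bool:
    """Download one image; return True on success, False on any failure."""
    os.makedirs(out_dir, exist_ok=True)
    name = os.path.join(out_dir, url.split("/")[-1] or "image.jpg")
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        with open(name, "wb") as f:
            f.write(resp.content)
        return True
    except (requests.RequestException, OSError):
        return False

def fetch_all(urls: list[str], workers: int = 32) -> int:
    """Download many URLs concurrently; return how many succeeded."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(fetch, urls))

# Example: fetch_all(open("urls.txt").read().split())  # one URL per line
```

Production tools layer resizing, deduplication, sharded output formats, and resumable state on top of this basic pattern.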