Simple image captioning model
☆1,413 · Jun 9, 2024 · Updated last year
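CLIP_prefix_caption follows the ClipCap recipe: a frozen CLIP image encoder produces an embedding, a small mapping network projects that embedding into a sequence of "prefix" token embeddings, and a GPT-2 decoder generates the caption conditioned on that prefix. A minimal PyTorch sketch of such a mapping network is below; the `PrefixMapper` name, layer sizes, and activation are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

class PrefixMapper(nn.Module):
    """Hypothetical sketch: map one CLIP image embedding to a sequence of
    prefix embeddings a GPT-2-style decoder can condition on."""

    def __init__(self, clip_dim: int = 512, gpt_dim: int = 768, prefix_length: int = 10):
        super().__init__()
        self.prefix_length = prefix_length
        self.gpt_dim = gpt_dim
        hidden = (gpt_dim * prefix_length) // 2
        # A small MLP: clip_dim -> hidden -> prefix_length * gpt_dim
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, gpt_dim * prefix_length),
        )

    def forward(self, clip_embedding: torch.Tensor) -> torch.Tensor:
        # (batch, clip_dim) -> (batch, prefix_length, gpt_dim)
        flat = self.mlp(clip_embedding)
        return flat.view(-1, self.prefix_length, self.gpt_dim)

mapper = PrefixMapper()
prefix = mapper(torch.randn(2, 512))   # pretend CLIP output for 2 images
print(prefix.shape)                    # torch.Size([2, 10, 768])
```

In the full pipeline the returned prefix would be concatenated with the caption's token embeddings and fed to GPT-2 (e.g. via `inputs_embeds`); this sketch only demonstrates the shape contract between the two models.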
Alternatives and similar repositories for CLIP_prefix_caption
Users interested in CLIP_prefix_caption compare it to the libraries listed below.
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆203 · Jan 28, 2024 · Updated 2 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆246 · Jun 10, 2025 · Updated 9 months ago
- An image captioning model based on ClipCap ☆321 · Apr 1, 2022 · Updated 3 years ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Sep 17, 2022 · Updated 3 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,557 · Apr 24, 2024 · Updated last year
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image ☆32,861 · Feb 18, 2026 · Updated last month
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,694 · Mar 3, 2026 · Updated 2 weeks ago
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆198 · May 9, 2023 · Updated 2 years ago
- [ICLR 2023] DeCap: Decoding CLIP Latents for Zero-shot Captioning ☆138 · Mar 16, 2023 · Updated 3 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆420 · Oct 28, 2022 · Updated 3 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,189 · Nov 18, 2024 · Updated last year
- An open-source implementation of CLIP. ☆13,528 · Mar 12, 2026 · Updated last week
- Meshed-Memory Transformer for Image Captioning (CVPR 2020) ☆545 · Dec 21, 2022 · Updated 3 years ago
- Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning ☆2,889 · Jul 28, 2022 · Updated 3 years ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆787 · Feb 9, 2023 · Updated 3 years ago
- Grounded Language-Image Pre-training ☆2,580 · Jan 24, 2024 · Updated 2 years ago
- Syncs this repo with self-critical.pytorch (the old master is kept in the old master branch for archive) ☆1,482 · Oct 5, 2023 · Updated 2 years ago
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆100 · Mar 11, 2023 · Updated 3 years ago
- Easily compute CLIP embeddings and build a CLIP retrieval system with them ☆2,733 · Aug 15, 2025 · Updated 7 months ago
- ☆67 · Nov 11, 2022 · Updated 3 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆677 · Sep 19, 2022 · Updated 3 years ago
- Repository for "Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search" ☆179 · Sep 30, 2021 · Updated 4 years ago
- ☆59 · Aug 30, 2023 · Updated 2 years ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆720 · Apr 15, 2022 · Updated 3 years ago
- A curated list of image captioning and related area resources. ☆1,074 · Mar 28, 2023 · Updated 2 years ago
- Language Models Can See: Plugging Visual Controls in Text Generation ☆258 · Jun 1, 2022 · Updated 3 years ago
- Code for ALBEF: a new vision-language pre-training method ☆1,758 · Sep 20, 2022 · Updated 3 years ago
- Implementation of "End-to-End Transformer Based Model for Image Captioning" [AAAI 2022] ☆69 · Jun 1, 2024 · Updated last year
- Oscar and VinVL ☆1,052 · Aug 28, 2023 · Updated 2 years ago
- ☆1,218 · May 13, 2024 · Updated last year
- OpenAI CLIP text encoders for multiple languages! ☆828 · May 15, 2023 · Updated 2 years ago
- Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine. ☆4,380 · Oct 19, 2025 · Updated 5 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,184 · May 20, 2024 · Updated last year
- Official PyTorch implementation of our CVPR 2022 paper: Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for … ☆61 · Oct 21, 2022 · Updated 3 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,230 · Jun 28, 2024 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆808 · Mar 20, 2024 · Updated 2 years ago
- ☆3,051 · Feb 27, 2023 · Updated 3 years ago
- Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019) ☆339 · May 2, 2021 · Updated 4 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆210 · Dec 18, 2022 · Updated 3 years ago