fkodom / clip-text-decoder
Generate text captions for images from their embeddings.
☆106 · Updated last year
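For context, clip-text-decoder trains a text decoder on top of frozen CLIP image embeddings and generates captions from them. The sketch below shows that general pipeline, not the repository's actual API: the CLIP side uses Hugging Face `transformers` calls that exist as written, while `decode_embedding` is a hypothetical stand-in for whatever trained decoder checkpoint the project provides.

```python
# Minimal sketch of an embedding-to-caption pipeline, assuming a Hugging Face
# CLIP backbone. `decode_embedding` is a hypothetical placeholder for the
# project's trained decoder, not its real API.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(path: str) -> torch.Tensor:
    """Return the (1, 512) CLIP image embedding for a single image."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        return clip.get_image_features(**inputs)

def decode_embedding(embedding: torch.Tensor) -> str:
    """Hypothetical stand-in: a decoder (e.g. a GPT-2-style LM conditioned
    on the embedding) would generate the caption here."""
    raise NotImplementedError("load the project's trained decoder checkpoint")

# caption = decode_embedding(embed_image("photo.jpg"))
```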
Alternatives and similar repositories for clip-text-decoder
Users interested in clip-text-decoder are comparing it with the libraries listed below:
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆168 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- PyTorch code for MUST ☆106 · Updated last week
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- [ICLR 2023] DeCap: Decoding CLIP Latents for Zero-shot Captioning ☆133 · Updated 2 years ago
- ☆47 · Updated 4 years ago
- ☆118 · Updated 2 years ago
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆162 · Updated last year
- CLIPScore (EMNLP 2021) code; a minimal sketch of the metric appears after this list ☆221 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆166 · Updated last year
- ☆157 · Updated 2 years ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆100 · Updated last year
- CapDec: SOTA zero-shot image captioning using CLIP and GPT-2, EMNLP 2022 (Findings) ☆196 · Updated last year
- CLIP object detection: search for objects in an image using natural language #Zeroshot #Unsupervised #CLIP #ODS ☆139 · Updated 3 years ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆275 · Updated 2 years ago
- Code and models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆58 · Updated last year
- Sparse Linear Concept Embeddings ☆95 · Updated last month
- ☆80 · Updated 5 months ago
- 🤗 Unofficial huggingface/diffusers-based implementation of the paper "Training-Free Structured Diffusion Guidance for Compositional Text… ☆120 · Updated 2 years ago
- TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering ☆160 · Updated last year
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆315 · Updated 11 months ago
- [ICCV 2023] Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models ☆84 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆390 · Updated 2 years ago
- Training code for CLIP-FlanT5 ☆26 · Updated 9 months ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆137 · Updated 2 years ago
- Official implementation of the paper "Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models" ☆170 · Updated last year
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions ☆55 · Updated last year
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆278 · Updated last year
- Official repository of paper "Subobject-level Image Tokenization" ☆70 · Updated last month
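As promised above, here is the CLIPScore metric from the list, sketched against the Hugging Face CLIP API rather than the repository's own code. CLIPScore (Hessel et al., EMNLP 2021) is defined as 2.5 · max(cos(E_image, E_text), 0); the model checkpoint and function names here are assumptions, not the official implementation.

```python
# Sketch of CLIPScore, assuming a Hugging Face CLIP backbone; the official
# repo may differ in model choice and preprocessing details.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, caption: str) -> float:
    """CLIPScore = 2.5 * max(cosine(image embedding, text embedding), 0)."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    cos = torch.nn.functional.cosine_similarity(img, txt).item()
    return 2.5 * max(cos, 0.0)

# score = clip_score(Image.open("photo.jpg").convert("RGB"), "a dog on a beach")
```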