fkodom / clip-text-decoder
Generate text captions for images from their embeddings.
☆108 Updated last year
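The idea behind clip-text-decoder is to condition a text decoder on a CLIP image embedding. The snippet below is a minimal conceptual sketch of that pipeline using HuggingFace transformers to extract the embedding; the checkpoint name, image path, and the `decoder` callable are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch (not clip-text-decoder's actual API): extract a CLIP image
# embedding; a trained language-model decoder would then map it to a caption.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_name = "openai/clip-vit-base-patch32"        # assumed checkpoint
model = CLIPModel.from_pretrained(model_name).eval()
processor = CLIPProcessor.from_pretrained(model_name)

image = Image.open("photo.jpg")                    # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    embedding = model.get_image_features(**inputs)  # shape: (1, 512)

# A caption model trained on (embedding, caption) pairs would then decode, e.g.:
# caption = decoder.generate(embedding)            # hypothetical decoder
```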
Alternatives and similar repositories for clip-text-decoder
Users interested in clip-text-decoder are comparing it to the repositories listed below.
- ☆54 Updated 2 years ago
- ☆50 Updated 2 years ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆125 Updated 7 months ago
- DeCap: Decoding CLIP Latents for Zero-shot Captioning (ICLR 2023) ☆133 Updated 2 years ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 Updated last year
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆169 Updated last year
- ☆120 Updated 2 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆168 Updated 2 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023) ☆140 Updated 2 weeks ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT-2, EMNLP 2022 (Findings) ☆197 Updated last year
- Retrieval-augmented diffusion from CompVis ☆53 Updated 2 years ago
- Official implementation of the paper "Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models" ☆172 Updated last year
- Code for the paper "LAFITE: Towards Language-Free Training for Text-to-Image Generation" (CVPR 2022) ☆183 Updated 2 years ago
- CLIPScore (EMNLP 2021) code; the scoring idea is sketched after this list ☆226 Updated 2 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 Updated 3 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆167 Updated this week
- JAX implementation of ViT-VQGAN ☆83 Updated 2 years ago
- Densely Captioned Images (DCI) dataset repository ☆185 Updated 11 months ago
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 Updated last year
- Release of ImageNet-Captions ☆49 Updated 2 years ago
- Using pretrained encoders and language models to generate captions from multimedia inputs ☆97 Updated 2 years ago
- 🤗 Unofficial huggingface/diffusers-based implementation of the paper "Training-Free Structured Diffusion Guidance for Compositional Text… ☆120 Updated 2 years ago
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions ☆55 Updated last year
- PyTorch code for MUST ☆107 Updated last month
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆276 Updated 2 years ago
- Official codebase for the paper "Retrieval-Augmented Diffusion Models" ☆131 Updated 2 years ago
- Official repository of the paper "Subobject-level Image Tokenization" (ICML 2025) ☆72 Updated 2 months ago
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆314 Updated last year
- Simple script to compute CLIP-based scores given a trained DALL-E model ☆30 Updated 4 years ago
- Supercharged BLIP-2 that can handle videos ☆118 Updated last year
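Several entries above revolve around CLIP-based caption scoring. As a rough illustration of the reference-free CLIPScore formulation, 2.5 * max(cos(image embedding, text embedding), 0), here is a sketch using HuggingFace transformers; it is not the official EMNLP code, and the checkpoint, image path, and caption are illustrative assumptions.

```python
# Sketch of the reference-free CLIPScore idea: 2.5 * max(cos(img, txt), 0).
# Not the official implementation; checkpoint and inputs are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")                     # hypothetical image
caption = "a dog playing fetch in a park"           # hypothetical candidate caption

inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])

# Cosine similarity between the two embeddings, clipped at zero and rescaled.
cos = torch.nn.functional.cosine_similarity(img_emb, txt_emb).item()
clip_score = 2.5 * max(cos, 0.0)
print(f"CLIP-based score: {clip_score:.3f}")
```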