TheoCoombes / ClipCap
Using pretrained encoder and language models to generate captions from multimedia inputs.
☆97 · Updated 2 years ago
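For context, the ClipCap approach pairs a frozen CLIP image encoder with a GPT-2 decoder through a small mapping network that turns the image embedding into a prefix of token embeddings. The sketch below is a minimal illustration of that idea, assuming OpenAI's `clip` package and Hugging Face `transformers`; the `PrefixMapper` module, its dimensions, and the greedy decoding loop are illustrative assumptions, not this repository's actual code.

```python
# A minimal sketch of the ClipCap idea: a frozen CLIP image encoder, a small mapping
# network, and GPT-2 as the caption decoder. PrefixMapper, its sizes, and the greedy
# loop are illustrative assumptions, not this repository's actual code.
import torch
import torch.nn as nn
import clip                                    # pip install git+https://github.com/openai/CLIP.git
from PIL import Image
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

class PrefixMapper(nn.Module):
    """Maps one CLIP image embedding to a sequence of GPT-2 input embeddings."""
    def __init__(self, clip_dim=512, gpt_dim=768, prefix_len=10):
        super().__init__()
        self.prefix_len, self.gpt_dim = prefix_len, gpt_dim
        self.mlp = nn.Sequential(nn.Linear(clip_dim, gpt_dim * prefix_len), nn.Tanh())

    def forward(self, clip_embed):              # (B, clip_dim) -> (B, prefix_len, gpt_dim)
        return self.mlp(clip_embed).view(-1, self.prefix_len, self.gpt_dim)

mapper = PrefixMapper().to(device).eval()       # would be trained on image-caption pairs

@torch.no_grad()
def caption(image_path, max_tokens=30):
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    prefix = mapper(clip_model.encode_image(image).float())   # visual prefix for GPT-2
    generated, tokens = prefix, []
    for _ in range(max_tokens):                                # plain greedy decoding
        logits = gpt2(inputs_embeds=generated).logits[:, -1, :]
        next_id = logits.argmax(dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
        tokens.append(next_id.item())
        next_embed = gpt2.transformer.wte(next_id).unsqueeze(1)
        generated = torch.cat([generated, next_embed], dim=1)
    return tokenizer.decode(tokens)
```

In practice only the mapping network (and optionally GPT-2) is trained on image-caption pairs while CLIP stays frozen; the untrained mapper above only shows the data flow.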
Alternatives and similar repositories for ClipCap
Users interested in ClipCap are comparing it to the libraries listed below.
- L-Verse: Bidirectional Generation Between Image and Text☆108 · Updated 2 months ago
- ☆97 · Updated last week
- Get hundreds of millions of image+URL pairs from the Crawling@Home dataset and preprocess them☆220 · Updated last year
- Easily compute CLIP embeddings from video frames (see the sketch after this list)☆145 · Updated last year
- ☆160 · Updated 3 years ago
- Refactoring dalle-pytorch and taming-transformers for TPU VM☆60 · Updated 3 years ago
- Use CLIP to represent video for Retrieval Task☆69 · Updated 4 years ago
- Finetune glide-text2im from openai on your own data.☆89 · Updated 2 years ago
- ECCV2020 paper: Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards. Code and Data.☆85 · Updated 2 years ago
- CLOOB training (JAX) and inference (JAX and PyTorch)☆72 · Updated 3 years ago
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings)☆197 · Updated last year
- ☆46 · Updated 3 years ago
- Command-line tool for downloading and extending the RedCaps dataset.☆48 · Updated last year
- Inverts CLIP text embeds to image embeds and visualizes with deep-image-prior.☆35 · Updated 2 years ago
- Script and models for clustering LAION-400m CLIP embeddings.☆26 · Updated 3 years ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral)☆125 · Updated 2 years ago
- CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification - 4th Workshop on Computer Vision for Fashion, Art, and Design☆27 · Updated 3 years ago
- ☆50 · Updated 2 years ago
- Training simple models to predict CLIP image embeddings from text embeddings, and vice versa.☆60 · Updated 3 years ago
- Implementation of Retrieval-Augmented Denoising Diffusion Probabilistic Models in Pytorch☆64 · Updated 3 years ago
- PyTorch code for MUST☆107 · Updated last month
- Simple script to compute CLIP-based scores given a DALL-e trained model.☆30 · Updated 4 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023)☆140 · Updated 2 weeks ago
- Let's make a video clip☆94 · Updated 2 years ago
- [BMVC22] Official Implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment"☆55 · Updated 2 years ago
- Generate text captions for images from their embeddings.☆108 · Updated last year
- ☆47 · Updated last month
- Language Models Can See: Plugging Visual Controls in Text Generation☆256 · Updated 3 years ago
- source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT☆72 · Updated 2 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision☆92 · Updated 3 years ago
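For the video-frame CLIP embedding entry referenced above, a minimal sketch of the general technique might look like the following. It assumes OpenCV for frame decoding and OpenAI's `clip` package; the sampling rate, batch size, and the `video_clip_embeddings` function name are arbitrary illustrative choices rather than that project's API.

```python
# Illustrative sketch (not that repo's CLI): sample frames from a video with OpenCV
# and encode each batch of frames with CLIP. Sampling stride and batch size are
# arbitrary choices for this example.
import cv2                                     # pip install opencv-python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def video_clip_embeddings(path, every_n_frames=30, batch_size=32):
    cap = cv2.VideoCapture(path)
    frames, embeds, idx = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV decodes as BGR
            frames.append(preprocess(Image.fromarray(rgb)))
        idx += 1
        if len(frames) == batch_size:                      # flush a full batch
            embeds.append(model.encode_image(torch.stack(frames).to(device)).cpu())
            frames = []
    if frames:                                             # flush the remainder
        embeds.append(model.encode_image(torch.stack(frames).to(device)).cpu())
    cap.release()
    return torch.cat(embeds) if embeds else torch.empty(0, 512)
```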