kevinzakka / clip_playground
An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities
☆174 · Updated 3 years ago
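The playground's core trick is CLIP's zero-shot image classification: embed an image and a set of candidate captions, then rank the captions by similarity. Below is a minimal sketch of that loop, assuming the official openai/CLIP package (`pip install git+https://github.com/openai/CLIP.git`); the image path `dog.jpg` and the label set are hypothetical stand-ins.

```python
# Minimal zero-shot classification sketch with the openai/CLIP package.
# Assumes a local image "dog.jpg" (hypothetical) and an example label set.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode one image and a handful of candidate labels.
image = preprocess(Image.open("dog.jpg")).unsqueeze(0).to(device)
labels = ["a dog", "a cat", "a bicycle"]
text = clip.tokenize([f"a photo of {label}" for label in labels]).to(device)

with torch.no_grad():
    # CLIP scores every (image, text) pair; softmax over the labels
    # turns the similarities into a zero-shot class distribution.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.3f}")
```

No task-specific training is involved: swapping in a different label list is all it takes to classify against a new set of categories, which is what makes the notebooks easy to extend.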
Alternatives and similar repositories for clip_playground
Users interested in clip_playground are comparing it to the repositories listed below.
- Generate text captions for images from their embeddings. ☆115 · Updated 2 years ago
- ☆120 · Updated 2 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023). ☆142 · Updated 4 months ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic. ☆279 · Updated 3 years ago
- This repo contains documentation and code needed to use the PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆286 · Updated last year
- [CVPR 2023] Learning Visual Representations via Language-Guided Sampling. ☆149 · Updated 2 years ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT-2, EMNLP 2022 (Findings). ☆201 · Updated last year
- A task-agnostic vision-language architecture as a step towards General Purpose Vision. ☆92 · Updated 4 years ago
- ☆227 · Updated last year
- ☆47 · Updated 5 months ago
- Official code for the CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time". ☆46 · Updated last year
- ☆53 · Updated 3 years ago
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023. ☆184 · Updated 2 years ago
- ☆235 · Updated 4 months ago
- Visual Language Transformer Interpreter: an interactive visualization tool for interpreting vision-language transformers. ☆97 · Updated 2 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality. ☆86 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training. ☆138 · Updated 2 years ago
- ☆46 · Updated last year
- Conceptual 12M: a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training. ☆405 · Updated 3 months ago
- Release of ImageNet-Captions. ☆51 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143). ☆178 · Updated 4 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022). ☆207 · Updated 2 years ago
- PyTorch code for MUST. ☆107 · Updated 5 months ago
- [NeurIPS 2021 Spotlight] Learning to Compose Visual Relations. ☆101 · Updated 2 years ago
- [NeurIPS 2022] Official PyTorch implementation of "Optimizing Relevance Maps of Vision Transformers Improves Robustness". This code allows … ☆133 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" (https://arxiv.org/abs/2107.06383). ☆416 · Updated 3 years ago
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training". ☆318 · Updated last year
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training. ☆167 · Updated 2 years ago
- [BMVC 2022] Official implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment". ☆55 · Updated 3 years ago
- ☆188 · Updated 2 years ago