kevinzakka / clip_playground
An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities
☆175 · Updated 3 years ago
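For context on what "zero-shot" means here: CLIP scores an image against arbitrary text labels with no task-specific training. Below is a minimal sketch using the openai/CLIP package (`pip install git+https://github.com/openai/CLIP.git`); the image path and label set are placeholder assumptions, not taken from the repo's notebooks.

```python
# Minimal CLIP zero-shot classification sketch.
# Assumes the openai/CLIP package is installed; "cat.jpg" and the
# label set are placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Embed one image and a handful of candidate labels, then score them.
image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)
labels = ["a cat", "a dog", "a car"]
text = clip.tokenize([f"a photo of {label}" for label in labels]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.3f}")
```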
Alternatives and similar repositories for clip_playground
Users interested in clip_playground are comparing it to the repositories listed below.
- Generate text captions for images from their embeddings. ☆116 · Updated 2 years ago
- ☆120 · Updated 2 years ago
- ☆230 · Updated last year
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023) ☆143 · Updated 5 months ago
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings) ☆202 · Updated last year
- [CVPR 2023] Learning Visual Representations via Language-Guided Sampling ☆149 · Updated 2 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- ☆53 · Updated 3 years ago
- This repo contains documentation and code needed to use PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆289 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆279 · Updated 3 years ago
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆187 · Updated 2 years ago
- Official code for our CVPR 2023 paper: Test of Time: Instilling Video-Language Models with a Sense of Time ☆46 · Updated last year
- ☆238 · Updated 6 months ago
- ☆47 · Updated 6 months ago
- Visual Language Transformer Interpreter - An interactive visualization tool for interpreting vision-language transformers ☆98 · Updated 2 years ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆133 · Updated 2 years ago
- ☆47 · Updated last year
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆87 · Updated last year
- L-Verse: Bidirectional Generation Between Image and Text ☆109 · Updated 8 months ago
- Patching open-vocabulary models by interpolating weights (see the interpolation sketch after this list) ☆91 · Updated 2 years ago
- ☆190 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆407 · Updated 4 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆208 · Updated 2 years ago
- [NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows … ☆133 · Updated 3 years ago
- https://arxiv.org/abs/2209.15162 ☆53 · Updated 2 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆49 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆183 · Updated 5 months ago
- ☆65 · Updated 2 years ago
- PyTorch code for MUST ☆107 · Updated 7 months ago
- [NeurIPS 2021 Spotlight] Learning to Compose Visual Relations ☆102 · Updated 2 years ago
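The weight-patching entry above rests on a simple recipe: linearly interpolate, parameter by parameter, between a zero-shot checkpoint and a fine-tuned one. A minimal sketch under stated assumptions: the two PyTorch state dicts share identical keys, and `alpha` and the checkpoint filenames are hypothetical, to be tuned or substituted by the user.

```python
# Sketch of patching an open-vocabulary model via weight interpolation.
# Assumes `zeroshot_sd` and `finetuned_sd` are state dicts of the same
# architecture; `alpha` is a tuning knob, not a value from the paper.
import torch

def interpolate_weights(zeroshot_sd, finetuned_sd, alpha=0.5):
    """Return a state dict mixing the two checkpoints per parameter."""
    assert zeroshot_sd.keys() == finetuned_sd.keys()
    return {
        k: (1.0 - alpha) * zeroshot_sd[k] + alpha * finetuned_sd[k]
        for k in zeroshot_sd
    }

# Usage (hypothetical checkpoint files):
# patched = interpolate_weights(torch.load("zeroshot.pt"),
#                               torch.load("finetuned.pt"), alpha=0.4)
# model.load_state_dict(patched)
```

The appeal of the approach is that `alpha` trades off fine-tuned task accuracy against the zero-shot model's robustness without any further training.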