shonenkov / CLIP-ODS
CLIP Object Detection: search for objects in an image using natural language. #Zeroshot #Unsupervised #CLIP #ODS
☆139 · Updated 2 years ago
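For orientation, here is a minimal, hedged sketch of the idea behind CLIP-based object search: score candidate image regions against a free-form text query with a pretrained CLIP model and return the best-matching box. This is not the CLIP-ODS API; it assumes the OpenAI `clip` package (`clip.load`, `clip.tokenize`), an arbitrary sliding-window proposal scheme, and the ViT-B/32 checkpoint.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def search_object(image_path: str, query: str, win: int = 224, stride: int = 112):
    """Return the (left, top, right, bottom) window most similar to `query`."""
    image = Image.open(image_path).convert("RGB")
    boxes, crops = [], []
    # Naive sliding-window "proposals"; CLIP-ODS itself may use a different scheme.
    for top in range(0, max(image.height - win, 1), stride):
        for left in range(0, max(image.width - win, 1), stride):
            box = (left, top, left + win, top + win)
            boxes.append(box)
            crops.append(preprocess(image.crop(box)))
    with torch.no_grad():
        img_feat = model.encode_image(torch.stack(crops).to(device))
        txt_feat = model.encode_text(clip.tokenize([query]).to(device))
        # Cosine similarity between each crop and the text query.
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        scores = (img_feat @ txt_feat.T).squeeze(1)
    return boxes[scores.argmax().item()]

# Example: box = search_object("street.jpg", "a red traffic light")
```

A real detector would replace the sliding windows with better region proposals and add non-maximum suppression, but the scoring step is the same contrastive image-text similarity used throughout the repositories listed below.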
Alternatives and similar repositories for CLIP-ODS:
Users interested in CLIP-ODS are comparing it to the libraries listed below.
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆309 · Updated 7 months ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 3 years ago
- Release of ImageNet-Captions ☆45 · Updated 2 years ago
- Generate text captions for images from their embeddings. ☆102 · Updated last year
- This repo contains documentation and code needed to use PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆273 · Updated 11 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆212 · Updated 2 years ago
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆94 · Updated last year
- Easily compute CLIP embeddings from video frames ☆140 · Updated last year
- PyTorch code for MUST ☆106 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆133 · Updated last year
- [ECCV'22] Official repository of paper titled "Class-agnostic Object Detection with Multi-modal Transformer". ☆306 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆390 · Updated last year
- Let's make a video clip ☆93 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆158 · Updated last year
- [NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows … ☆127 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆377 · Updated last year
- [NeurIPS 2022] Official repository of paper titled "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary … ☆285 · Updated 2 years ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings) ☆189 · Updated last year
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆158 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- 1st Place Solution in Google Universal Image Embedding ☆62 · Updated last year
- Get hundreds of millions of image+URL pairs from the crawling-at-home dataset and preprocess them ☆215 · Updated 8 months ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆98 · Updated last year
- Densely Captioned Images (DCI) dataset repository. ☆168 · Updated 6 months ago
- [NeurIPS 2022] The official implementation of "Learning to Discover and Detect Objects". ☆108 · Updated last year
- PyTorch implementation of LOST unsupervised object discovery method ☆239 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆272 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models". https://arxiv.org/abs/2201.08371.