shonenkov / CLIP-ODS
CLIP Object Detection: search for objects in an image using natural language. #Zeroshot #Unsupervised #CLIP #ODS
☆139 Updated 3 years ago
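The core idea behind CLIP-ODS is scoring candidate image regions against a free-text query with CLIP. Below is a minimal sketch of that general idea using OpenAI's `clip` package; the sliding-window proposals and the `search` helper are illustrative assumptions, not CLIP-ODS's actual API.

```python
# Sketch: rank candidate boxes by CLIP similarity to a text query.
# Naive sliding-window proposals; NOT the CLIP-ODS API.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def sliding_boxes(w, h, size=224, stride=112):
    """Yield (left, top, right, bottom) windows over a w x h image."""
    for y in range(0, max(h - size, 0) + 1, stride):
        for x in range(0, max(w - size, 0) + 1, stride):
            yield (x, y, x + size, y + size)

@torch.no_grad()
def search(image: Image.Image, query: str, topk: int = 5):
    boxes = list(sliding_boxes(*image.size))
    # Encode every crop and the query, then rank by cosine similarity.
    crops = torch.stack([preprocess(image.crop(b)) for b in boxes]).to(device)
    img_feats = model.encode_image(crops)
    txt_feats = model.encode_text(clip.tokenize([query]).to(device))
    img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
    txt_feats = txt_feats / txt_feats.norm(dim=-1, keepdim=True)
    scores = (img_feats @ txt_feats.T).squeeze(1)
    best = scores.topk(min(topk, len(boxes))).indices.tolist()
    return [(boxes[i], scores[i].item()) for i in best]
```

Usage: `search(Image.open("street.jpg"), "a red car")` returns the top-scoring boxes with their similarity scores. A real system would use a region-proposal method instead of a fixed grid.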
Alternatives and similar repositories for CLIP-ODS:
Users interested in CLIP-ODS are comparing it to the libraries listed below.
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 Updated 3 years ago
- PyTorch code for MUST ☆106 Updated 2 years ago
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆312 Updated 9 months ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆163 Updated 2 years ago
- ☆47 Updated 4 years ago
- PyTorch implementation of the LOST unsupervised object discovery method ☆242 Updated last year
- [NeurIPS 2022] The official implementation of "Learning to Discover and Detect Objects". ☆110 Updated last year
- This repo contains documentation and code needed to use the PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆276 Updated last year
- [ECCV'22] Official repository of paper titled "Class-agnostic Object Detection with Multi-modal Transformer". ☆309 Updated last year
- Release of ImageNet-Captions ☆45 Updated 2 years ago
- [NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows … ☆127 Updated 2 years ago
- Get hundreds of millions of image+url pairs from the crawling-at-home dataset and preprocess them ☆218 Updated 10 months ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆136 Updated 2 years ago
- Generate text captions for images from their embeddings. ☆105 Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆213 Updated 2 years ago
- GRiT: A Generative Region-to-text Transformer for Object Understanding (https://arxiv.org/abs/2212.00280) ☆317 Updated last year
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆94 Updated 2 years ago
- ☆269 Updated 3 months ago
- ☆217 Updated 3 years ago
- ☆46 Updated 3 years ago
- (CVPR 2022) PyTorch implementation of "Self-supervised transformers for unsupervised object discovery using normalized cut" ☆313 Updated last year
- [NeurIPS 2022] Official repository of paper titled "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary … ☆290 Updated 2 years ago
- Easily compute CLIP embeddings from video frames (see the sketch after this list) ☆143 Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆273 Updated 2 years ago
- [CVPR 2023] Implementation of "Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information" ☆90 Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆393 Updated last year
- A simple implementation of the Pix2Seq model for object detection in PyTorch ☆123 Updated last year
- A new framework for open-vocabulary object detection, based on maskrcnn-benchmark ☆238 Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆384 Updated 2 years ago
- Baby-DALL3: annotate anything in visual tasks and generate anything, all in one pipeline with GPT-4 (a small baby of DALL·E 3). ☆82 Updated last year
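One of the entries above computes CLIP embeddings from video frames. A hedged sketch of that general workflow, assuming OpenCV for decoding and the same `clip` package as above; the frame stride, batch size, and `video_clip_embeddings` helper are illustrative, not that repo's API:

```python
# Sketch: sample every Nth frame of a video and embed with CLIP.
import cv2  # pip install opencv-python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def video_clip_embeddings(path: str, every_n: int = 30, batch: int = 32):
    cap = cv2.VideoCapture(path)
    feats, buf, i = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            # OpenCV decodes BGR; CLIP's preprocess expects an RGB PIL image.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            buf.append(preprocess(Image.fromarray(rgb)))
        if len(buf) == batch:
            feats.append(model.encode_image(torch.stack(buf).to(device)).cpu())
            buf = []
        i += 1
    if buf:  # flush the last partial batch
        feats.append(model.encode_image(torch.stack(buf).to(device)).cpu())
    cap.release()
    # ViT-B/32 produces 512-dimensional embeddings, one row per sampled frame.
    return torch.cat(feats) if feats else torch.empty(0, 512)
```

Batching the frames keeps GPU utilization high; the resulting matrix can be indexed for text-to-frame search exactly like the image embeddings in the first sketch.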