shonenkov / CLIP-ODS
CLIP Object Detection: search for objects in an image using natural language. #Zeroshot #Unsupervised #CLIP #ODS
☆139 · Updated 3 years ago
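Below is a minimal sketch of the zero-shot, language-driven object search the project describes: candidate regions of an image are scored against a text query with a pretrained CLIP model, and the best-matching regions are returned. This is illustrative only; it assumes the openai/CLIP package and externally supplied candidate boxes (e.g. from a sliding window or a proposal method), and it is not CLIP-ODS's actual API.

```python
# Illustrative sketch (hypothetical helper, not CLIP-ODS's API):
# rank candidate boxes by CLIP similarity to a natural-language query.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def search_object(image_path, query, boxes):
    """Return candidate boxes ranked by CLIP similarity to the text query.

    boxes: list of (left, upper, right, lower) pixel tuples, obtained elsewhere.
    """
    image = Image.open(image_path).convert("RGB")
    # Crop each candidate region and preprocess it for the CLIP image encoder.
    crops = torch.stack([preprocess(image.crop(box)) for box in boxes]).to(device)
    text = clip.tokenize([query]).to(device)
    with torch.no_grad():
        image_features = model.encode_image(crops)
        text_features = model.encode_text(text)
        # Normalize and compute cosine similarity between each crop and the query.
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        scores = (image_features @ text_features.T).squeeze(1)
    order = scores.argsort(descending=True)
    return [(boxes[i], scores[i].item()) for i in order]

# Example usage (hypothetical image and boxes):
# results = search_object("street.jpg", "a red bicycle",
#                         [(0, 0, 224, 224), (100, 50, 420, 380)])
```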
Alternatives and similar repositories for CLIP-ODS
Users interested in CLIP-ODS are comparing it to the libraries listed below.
- PyTorch code for MUST ☆106 · Updated last week
- Generate text captions for images from their embeddings. ☆106 · Updated last year
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 3 years ago
- [ECCV'22] Official repository of paper titled "Class-agnostic Object Detection with Multi-modal Transformer". ☆310 · Updated 2 years ago
- [NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows … ☆127 · Updated 2 years ago
- This repo contains documentation and code needed to use PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆281 · Updated last year
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- PyTorch implementation of the LOST unsupervised object discovery method ☆244 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆137 · Updated 2 years ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆215 · Updated 2 years ago
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆315 · Updated 11 months ago
- (CVPR 2022) PyTorch implementation of "Self-supervised transformers for unsupervised object discovery using normalized cut" ☆313 · Updated 2 years ago
- ☆47 · Updated 4 years ago
- Get hundreds of millions of image+URL pairs from the Crawling@Home dataset and preprocess them ☆220 · Updated 11 months ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆390 · Updated 2 years ago
- ☆157 · Updated 2 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆397 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆764 · Updated last year
- [NeurIPS 2022] Official repository of paper titled "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary … ☆290 · Updated 2 years ago
- Release of ImageNet-Captions ☆48 · Updated 2 years ago
- Easily compute CLIP embeddings from video frames ☆145 · Updated last year
- Let's make a video clip ☆93 · Updated 2 years ago
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆167 · Updated 2 years ago
- source code for ICLR'22 paper "VOS: Learning What You Don’t Know by Virtual Outlier Synthesis" ☆313 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆166 · Updated last year
- L-Verse: Bidirectional Generation Between Image and Text ☆108 · Updated last month
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆168 · Updated 2 years ago
- ☆269 · Updated 5 months ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆275 · Updated 2 years ago
- [NeurIPS 2022] The official implementation of "Learning to Discover and Detect Objects". ☆110 · Updated last year