shonenkov / CLIP-ODS
CLIP Object Detection: search for objects in an image using natural language #Zeroshot #Unsupervised #CLIP #ODS
☆139 · Updated 3 years ago
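For readers new to the idea, the sketch below shows the general recipe behind CLIP-based object search: score candidate crops of an image against a free-form text query with a stock CLIP model and keep the highest-scoring boxes. It is a minimal illustration under stated assumptions, not the CLIP-ODS API; the OpenAI `clip` package, the naive sliding-window box generation, and the `search_objects` helper are assumptions made for this example.

```python
# Minimal sketch of CLIP-based zero-shot object search (illustrative only;
# CLIP-ODS ships its own wrapper and a smarter box-generation scheme).
# Assumes: pip install torch pillow git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def search_objects(image_path, text_query, window=224, stride=112, top_k=5):
    """Score sliding-window crops of an image against a natural-language query."""
    image = Image.open(image_path).convert("RGB")
    w, h = image.size
    boxes, crops = [], []
    # Naive grid of candidate boxes; a real system would use denser anchors or proposals.
    for x in range(0, max(w - window, 1), stride):
        for y in range(0, max(h - window, 1), stride):
            box = (x, y, min(x + window, w), min(y + window, h))
            boxes.append(box)
            crops.append(preprocess(image.crop(box)))
    with torch.no_grad():
        image_features = model.encode_image(torch.stack(crops).to(device))
        text_features = model.encode_text(clip.tokenize([text_query]).to(device))
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        scores = (image_features @ text_features.T).squeeze(1)  # cosine similarity per box
    best = scores.topk(min(top_k, len(boxes))).indices.tolist()
    return [(boxes[i], scores[i].item()) for i in best]

# Example usage (hypothetical image path and query):
# search_objects("street.jpg", "a red traffic light")
```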
Alternatives and similar repositories for CLIP-ODS
Users interested in CLIP-ODS are comparing it to the libraries listed below
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆314 · Updated last year
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 3 years ago
- [NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows … ☆129 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆137 · Updated 2 years ago
- PyTorch code for MUST ☆107 · Updated last month
- [ECCV'22] Official repository of paper titled "Class-agnostic Object Detection with Multi-modal Transformer". ☆311 · Updated 2 years ago
- [NeurIPS 2022] Official repository of paper titled "Bridging the Gap between Object and Image-level Representations for Open-Vocabulary … ☆292 · Updated 2 years ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆218 · Updated 2 years ago
- ☆271 · Updated 6 months ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆394 · Updated 2 years ago
- This repo contains documentation and code needed to use the PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆282 · Updated last year
- [NeurIPS 2022] The official implementation of "Learning to Discover and Detect Objects". ☆111 · Updated 2 years ago
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- Generate text captions for images from their embeddings. ☆108 · Updated last year
- Get hundreds of millions of image+URL pairs from the crawling-at-home dataset and preprocess them ☆220 · Updated last year
- Release of ImageNet-Captions ☆49 · Updated 2 years ago
- ☆176 · Updated 2 years ago
- This repository contains the official implementation of the NeurIPS'21 paper, ROADMAP: Robust and Decomposable Average Precision for Imag… ☆75 · Updated 2 years ago
- (CVPR 2022) PyTorch implementation of "Self-supervised transformers for unsupervised object discovery using normalized cut" ☆313 · Updated 2 years ago
- This repo contains the code and configuration files for reproducing object detection results of FocalNets with DINO ☆67 · Updated 2 years ago
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024) ☆327 · Updated last year
- BigDetection: A Large-scale Benchmark for Improved Object Detector Pre-training ☆396 · Updated 8 months ago
- PyTorch implementation of the LOST unsupervised object discovery method ☆247 · Updated 2 years ago
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models". https://arxiv.org/abs/2201.08371 ☆179 · Updated 3 years ago
- Easily compute CLIP embeddings from video frames ☆145 · Updated last year
- Open-source code for Generic Grouping Network (GGN, CVPR 2022) ☆111 · Updated 3 months ago
- Optimized library for large-scale extraction of frames and audio from video. ☆204 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆167 · Updated this week
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆771 · Updated last year
- Let's make a video clip ☆93 · Updated 2 years ago