shonenkov / CLIP-ODS
CLIP Object Detection: search for objects in an image using natural language. #Zeroshot #Unsupervised #CLIP #ODS
☆140 · Updated 3 years ago
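The core idea behind CLIP-based object search, scoring candidate image regions against a natural-language query in CLIP's shared embedding space, can be sketched with precomputed embeddings. The sketch below is illustrative only: the `rank_regions` helper and the toy 4-d vectors are hypothetical stand-ins for real CLIP text/image embeddings, and CLIP itself is not invoked.

```python
import numpy as np

def rank_regions(text_emb: np.ndarray, region_embs: np.ndarray) -> np.ndarray:
    """Rank candidate region embeddings by cosine similarity to a text query.

    text_emb:    (d,) embedding of the query, e.g. "a red car".
    region_embs: (n, d) embeddings of n candidate boxes/crops.
    Returns region indices, best match first.
    """
    t = text_emb / np.linalg.norm(text_emb)
    r = region_embs / np.linalg.norm(region_embs, axis=1, keepdims=True)
    sims = r @ t  # cosine similarity of each region with the query
    return np.argsort(-sims)

# Toy example: three fake 4-d embeddings (illustrative values only).
query = np.array([1.0, 0.0, 0.0, 0.0])
regions = np.array([
    [0.1, 0.9, 0.0, 0.0],  # poor match
    [0.9, 0.1, 0.0, 0.0],  # best match
    [0.5, 0.5, 0.0, 0.0],  # middle
])
order = rank_regions(query, regions)  # best-first ordering: [1, 2, 0]
```

In practice the region embeddings would come from running CLIP's image encoder over sliding-window or proposal crops, and the top-ranked boxes are returned as detections.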
Alternatives and similar repositories for CLIP-ODS
Users interested in CLIP-ODS are comparing it to the libraries listed below.
- PyTorch code for MUST ☆107 · Updated 6 months ago
- [NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows … ☆133 · Updated 2 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- ☆48 · Updated 4 years ago
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models". https://arxiv.org/abs/2201.08371 ☆182 · Updated 3 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆175 · Updated 3 years ago
- This repo contains documentation and code needed to use the PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆288 · Updated last year
- Generate text captions for images from their embeddings. ☆116 · Updated 2 years ago
- CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification - 4th Workshop on Computer Vision for Fashion, Art, and Design ☆28 · Updated 3 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- Implementation of Uniformer, a simple attention and 3D convolutional net that achieved SOTA in a number of video classification tasks, de… ☆102 · Updated 3 years ago
- Official repository of the paper "GPR1200: A Benchmark for General-Purpose Content-Based Image Retrieval" ☆29 · Updated 7 months ago
- Release of ImageNet-Captions ☆51 · Updated 2 years ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆279 · Updated 3 years ago
- Optimized library for large-scale extraction of frames and audio from video. ☆205 · Updated 2 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆246 · Updated 5 months ago
- ☆61 · Updated 4 years ago
- Get hundreds of millions of image+url pairs from the crawling-at-home dataset and preprocess them ☆222 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated 2 years ago
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆319 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆405 · Updated 4 months ago
- ☆47 · Updated 5 months ago
- Easily compute CLIP embeddings from video frames ☆147 · Updated 2 years ago
- [NeurIPS 2022] The official implementation of "Learning to Discover and Detect Objects". ☆111 · Updated 2 years ago
- [NeurIPS'22] ReCo: Retrieve and Co-segment for Zero-shot Transfer ☆62 · Updated 2 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆80 · Updated 3 years ago
- L-Verse: Bidirectional Generation Between Image and Text ☆109 · Updated 7 months ago
- ☆275 · Updated 11 months ago
- 1st Place Solution in Google Universal Image Embedding ☆67 · Updated 2 years ago