shonenkov / CLIP-ODS
CLIP Object Detection: search for objects in an image using natural language. #Zeroshot #Unsupervised #CLIP #ODS
☆138 · Updated 3 years ago
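The core idea behind CLIP-ODS is to score candidate image regions against a free-form text query with CLIP and return the best-matching boxes. Below is a minimal sketch of that idea, not the repository's actual API: it assumes the openai/CLIP package (`pip install git+https://github.com/openai/CLIP.git`), and the `boxes` argument and `search_object` helper are hypothetical names for illustration.

```python
# Sketch: rank candidate regions by CLIP similarity to a text query.
# Assumes the openai/CLIP package; `search_object` and `boxes` are
# hypothetical, not part of the CLIP-ODS repo's public interface.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def search_object(image: Image.Image, boxes, query: str):
    """Return (box, score) pairs sorted by similarity to `query`.

    `boxes` is a list of (left, upper, right, lower) crops, e.g. from
    a sliding window or an external region-proposal method.
    """
    # Crop and preprocess each candidate region into a single batch.
    crops = torch.stack([preprocess(image.crop(b)) for b in boxes]).to(device)
    text = clip.tokenize([query]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(crops)
        txt_feat = model.encode_text(text)
        # Normalize so the dot product is cosine similarity.
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        scores = (img_feat @ txt_feat.T).squeeze(1)
    order = scores.argsort(descending=True)
    return [(boxes[i], scores[i].item()) for i in order]
```

Because CLIP is queried zero-shot, no detector training is needed; detection quality depends mainly on how the candidate boxes are generated.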
Alternatives and similar repositories for CLIP-ODS
Users interested in CLIP-ODS are comparing it to the libraries listed below.
- PyTorch code for MUST ☆107 · Updated 3 months ago
- [NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows … ☆131 · Updated 2 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- Using pretrained encoder and language models to generate captions from multimedia inputs ☆97 · Updated 2 years ago
- Generate text captions for images from their embeddings ☆111 · Updated 2 years ago
- ☆46 · Updated 4 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆78 · Updated 3 years ago
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models" (https://arxiv.org/abs/2201.08371) ☆179 · Updated 3 years ago
- L-Verse: Bidirectional Generation Between Image and Text ☆108 · Updated 4 months ago
- This repo contains documentation and code needed to use the PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆286 · Updated last year
- Implementation of Uniformer, a simple attention and 3D convolutional net that achieved SOTA in a number of video classification tasks, de… ☆101 · Updated 3 years ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆219 · Updated 2 years ago
- Release of ImageNet-Captions ☆50 · Updated 2 years ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Updated 2 years ago
- [NeurIPS 2022] The official implementation of "Learning to Discover and Detect Objects" ☆111 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training ☆394 · Updated 3 weeks ago
- [BMVC22] Official implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment" ☆55 · Updated 2 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆171 · Updated last month
- 1st Place Solution in Google Universal Image Embedding ☆67 · Updated 2 years ago
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆316 · Updated last year
- Understanding model mistakes with human annotations ☆106 · Updated 2 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆169 · Updated 3 years ago
- Easily compute CLIP embeddings from video frames ☆145 · Updated last year
- PyTorch implementation of the LOST unsupervised object discovery method ☆248 · Updated 2 years ago
- A PyTorch implementation of Mugs, proposed in our paper "Mugs: A Multi-Granular Self-Supervised Learning Framework" ☆83 · Updated last year
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 · Updated 2 years ago
- This repo contains the code and configuration files for reproducing object detection results of FocalNets with DINO ☆67 · Updated 2 years ago
- ☆61 · Updated 3 years ago