Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm
☆675 · Sep 19, 2022 · Updated 3 years ago
Alternatives and similar repositories for DeCLIP
Users interested in DeCLIP are comparing it to the repositories listed below.
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆787 · Feb 9, 2023 · Updated 3 years ago
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆543 · Sep 15, 2023 · Updated 2 years ago
- Grounded Language-Image Pre-training ☆2,572 · Jan 24, 2024 · Updated 2 years ago
- BigDetection: A Large-scale Benchmark for Improved Object Detector Pre-training ☆400 · Oct 23, 2024 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,554 · Apr 24, 2024 · Updated last year
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆421 · Oct 28, 2022 · Updated 3 years ago
- ☆1,048 · Oct 3, 2022 · Updated 3 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,179 · May 20, 2024 · Updated last year
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022 ☆782 · May 10, 2022 · Updated 3 years ago
- Code for ALBEF: a new vision-language pre-training method ☆1,754 · Sep 20, 2022 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆407 · Nov 10, 2023 · Updated 2 years ago
- Code release for "Detecting Twenty-thousand Classes using Image-level Supervision" ☆1,999 · Mar 21, 2024 · Updated last year
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch ☆718 · Apr 15, 2022 · Updated 3 years ago
- An open source implementation of CLIP ☆13,397 · Feb 20, 2026 · Updated last week
- EVA Series: Visual Representation Fantasies from BAAI ☆2,647 · Aug 1, 2024 · Updated last year
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆268 · Oct 2, 2024 · Updated last year
- [ICCV 2023] You Only Look at One Partial Sequence ☆343 · Oct 21, 2023 · Updated 2 years ago
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆319 · Jun 3, 2024 · Updated last year
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 ☆615 · Dec 13, 2022 · Updated 3 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆807 · Mar 20, 2024 · Updated last year
- Official implementation of "SimMIM: A Simple Framework for Masked Image Modeling" ☆1,024 · Sep 29, 2022 · Updated 3 years ago
- Object detection on multiple datasets with an automatically learned unified label space ☆516 · Mar 8, 2024 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆765 · Apr 14, 2022 · Updated 3 years ago
- [CVPR 2023] Official implementation of X-Decoder for generalized decoding for pixel, image, and language ☆1,343 · Oct 5, 2023 · Updated 2 years ago
- [CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations ☆565 · Aug 22, 2025 · Updated 6 months ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,677 · Aug 5, 2024 · Updated last year
- [Under preparation] Code repo for "Open-Vocabulary DETR with Conditional Matching" (ECCV 2022) ☆237 · Aug 3, 2022 · Updated 3 years ago
- COYO-700M: Large-scale Image-Text Pair Dataset ☆1,251 · Nov 30, 2022 · Updated 3 years ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆524 · Mar 14, 2023 · Updated 2 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆723 · Aug 8, 2023 · Updated 2 years ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆463 · May 9, 2022 · Updated 3 years ago
- OpenMMLab Self-Supervised Learning Toolbox and Benchmark ☆3,297 · Jun 25, 2023 · Updated 2 years ago
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models" (https://arxiv.org/abs/2201.08371) ☆182 · Apr 17, 2022 · Updated 3 years ago
- OpenAI CLIP text encoders for multiple languages! ☆826 · May 15, 2023 · Updated 2 years ago
- Conceptual 12M: a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training ☆415 · Jul 14, 2025 · Updated 7 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ☆1,232 · Jun 28, 2024 · Updated last year
- Omnivore: A Single Model for Many Visual Modalities ☆571 · Nov 12, 2022 · Updated 3 years ago
- Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine ☆4,369 · Oct 19, 2025 · Updated 4 months ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆492 · Nov 25, 2022 · Updated 3 years ago