Sense-GVT / DeCLIP
Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm
☆666 · Updated 2 years ago
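For context on what the repositories below have in common: DeCLIP, like most of the CLIP variants listed here, trains on a symmetric image-text contrastive objective, and adds further supervision signals (e.g. within-modality self-supervision and nearest-neighbor text supervision) to make pre-training more data efficient. The sketch below shows only the shared contrastive core; the function name and temperature value are illustrative and not taken from the DeCLIP codebase.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) outputs of the two encoders, where
    row i of each tensor comes from the same image-caption pair.
    """
    # L2-normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; the diagonal holds the true pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```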
Alternatives and similar repositories for DeCLIP
Users interested in DeCLIP are comparing it to the repositories listed below.
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated last year
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆708 · Updated 3 years ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆768 · Updated 3 years ago
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆536 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆483 · Updated 2 years ago
- ☆1,021 · Updated 2 years ago
- Code release for SLIP: Self-supervision Meets Language-Image Pre-training ☆778 · Updated 2 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆743 · Updated 3 years ago
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆426 · Updated 2 years ago
- ☆637 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆783 · Updated last year
- Conceptual 12M, a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training ☆397 · Updated last month
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆415 · Updated 2 years ago
- ☆542 · Updated 3 years ago
- [CVPR 2022] Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning ☆265 · Updated 10 months ago
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆863 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆293 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆712 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆219 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆372 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆731 · Updated 3 years ago
- CLIP-like model evaluation ☆759 · Updated 2 weeks ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆721 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,168 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-training) ☆1,209 · Updated last year
- Pix2Seq codebase: multi-task training with generative modeling (autoregressive and diffusion) ☆922 · Updated last year
- EsViT: Efficient Self-Supervised Vision Transformers ☆413 · Updated 2 years ago
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024) ☆332 · Updated last year
- Multi-modality pre-training ☆502 · Updated last year
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆284 · Updated last year