Zasder3 / train-CLIP
A PyTorch Lightning solution to training OpenAI's CLIP from scratch.
☆677 · Updated 2 years ago
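For context, the core of any such trainer is CLIP's symmetric contrastive (InfoNCE) objective. Below is a minimal sketch of that objective wrapped in a LightningModule; the encoder modules, embedding dimension, and optimizer settings are illustrative assumptions, not train-CLIP's actual API.

```python
# Minimal sketch of CLIP-style contrastive training in PyTorch Lightning.
# The encoders, embed_dim, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl

class LitCLIP(pl.LightningModule):
    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module, embed_dim: int = 512):
        super().__init__()
        self.image_encoder = image_encoder  # maps images -> (B, embed_dim)
        self.text_encoder = text_encoder    # maps tokens -> (B, embed_dim)
        # Learnable temperature, initialized to log(1/0.07) as in the CLIP paper.
        self.logit_scale = nn.Parameter(torch.tensor(2.6593))

    def training_step(self, batch, batch_idx):
        images, texts = batch
        # L2-normalize both embeddings so dot products are cosine similarities.
        img = F.normalize(self.image_encoder(images), dim=-1)
        txt = F.normalize(self.text_encoder(texts), dim=-1)
        # (B, B) similarity matrix; matched image-text pairs lie on the diagonal.
        logits = self.logit_scale.exp() * img @ txt.t()
        labels = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE: cross-entropy in both image->text and text->image directions.
        loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=5e-4, weight_decay=0.2)
```

Given concrete encoders and a DataLoader yielding (images, tokenized texts) batches, training reduces to `pl.Trainer(max_epochs=...).fit(model, loader)`, with Lightning handling device placement, checkpointing, and distributed training.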
Alternatives and similar repositories for train-CLIP:
Users interested in train-CLIP are comparing it to the repositories listed below.
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆643 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆665 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,168 · Updated 7 months ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆705 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,099 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆390 · Updated last year
- ☆496 · Updated 2 years ago
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆820 · Updated last year
- Simple image captioning model ☆1,337 · Updated 7 months ago
- CLIP-like model evaluation ☆654 · Updated 5 months ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆746 · Updated 2 years ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆754 · Updated last year
- OpenAI CLIP text encoders for multiple languages! ☆777 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆555 · Updated last year
- ☆985 · Updated 2 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆700 · Updated 2 years ago
- ☆574 · Updated last year
- Code for ALBEF: a new vision-language pre-training method ☆1,599 · Updated 2 years ago
- Simple implementation of OpenAI CLIP model in PyTorch. ☆649 · Updated 9 months ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆904 · Updated 9 months ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆409 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆377 · Updated last year
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,855 · Updated 8 months ago
- [CVPR 2022] Official PyTorch implementation for DiffusionCLIP: Text-guided Image Manipulation Using Diffusion Models ☆819 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,228 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆733 · Updated 10 months ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,428 · Updated 9 months ago
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆410 · Updated last year
- Pix2Seq codebase: multi-task learning with generative modeling (autoregressive and diffusion) ☆892 · Updated last year
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆524 · Updated last year