A PyTorch Lightning solution to training OpenAI's CLIP from scratch.
☆718 · Updated Apr 15, 2022
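Training CLIP from scratch centers on its symmetric contrastive (InfoNCE) objective: in a batch of matched image–text pairs, each image must pick out its own caption among all captions in the batch, and vice versa. A minimal PyTorch sketch of that loss, assuming unit-normalized embeddings and a scalar logit scale (the function name and signature are illustrative, not train-CLIP's actual API):

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, logit_scale=100.0):
    """Symmetric InfoNCE loss over a batch of matched image/text embeddings.

    image_emb, text_emb: (N, D) tensors; row i of each is a matched pair.
    logit_scale: inverse temperature (CLIP learns this; ~100 at convergence).
    """
    # Cosine similarity: normalize, then dot every image with every text.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = logit_scale * image_emb @ text_emb.t()  # (N, N)

    # The matched text for image i sits on the diagonal, i.e. column i.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image->text and text->image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```

The diagonal-as-target trick means no explicit negatives are mined: every other pair in the batch serves as a negative, which is why CLIP-style training benefits from very large batch sizes.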
Alternatives and similar repositories for train-CLIP
Users interested in train-CLIP are comparing it to the repositories listed below.
- ☆48 · Updated Aug 2, 2021
- An open-source implementation of CLIP ☆13,430 · Updated this week
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆675 · Updated Sep 19, 2022
- Code release for SLIP: Self-supervision Meets Language-Image Pre-training ☆787 · Updated Feb 9, 2023
- OpenAI CLIP text encoders for multiple languages! ☆826 · Updated May 15, 2023
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆421 · Updated Oct 28, 2022
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,179 · Updated May 20, 2024
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆723 · Updated Aug 8, 2023
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image ☆32,642 · Updated Feb 18, 2026
- Code for ALBEF: a new vision-language pre-training method ☆1,754 · Updated Sep 20, 2022
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ☆1,232 · Updated Jun 28, 2024
- Robust fine-tuning of zero-shot models ☆760 · Updated Apr 29, 2022
- ☆1,048 · Updated Oct 3, 2022
- Simple implementation of OpenAI's CLIP model in PyTorch ☆720 · Updated Oct 18, 2025
- Easily turn large sets of image URLs into an image dataset; can download, resize, and package 100M URLs in 20h on one machine ☆4,369 · Updated Oct 19, 2025
- An official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆1,025 · Updated Apr 12, 2024
- Conceptual 12M, a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training ☆415 · Updated Jul 14, 2025
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆374 · Updated Jul 29, 2023
- Easily compute CLIP embeddings and build a CLIP retrieval system with them ☆2,730 · Updated Aug 15, 2025
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,554 · Updated Apr 24, 2024
- CLIP-like model evaluation ☆802 · Updated Jan 15, 2026
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆544 · Updated Sep 15, 2023
- WIT (Wikipedia-based Image Text) Dataset, a large multimodal multilingual dataset comprising 37M+ image-text sets with 11M+ unique imag… ☆1,100 · Updated Sep 27, 2024
- Code release for "Detecting Twenty-thousand Classes using Image-level Supervision" ☆1,999 · Updated Mar 21, 2024
- [CVPR 2022] Official PyTorch implementation of DiffusionCLIP: Text-Guided Image Manipulation Using Diffusion Models ☆867 · Updated Mar 27, 2023
- ☆65 · Updated Nov 4, 2021
- Simple image captioning model ☆1,408 · Updated Jun 9, 2024
- Grounded Language-Image Pre-training ☆2,572 · Updated Jan 24, 2024
- CLIP (Contrastive Language–Image Pre-training) for Italian ☆185 · Updated May 11, 2023
- Source code for models described in "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) ☆858 · Updated Sep 30, 2021
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,681 · Updated Aug 5, 2024
- A non-JIT implementation/replication of OpenAI's CLIP in PyTorch ☆34 · Updated Jan 15, 2021
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆407 · Updated Nov 10, 2023
- Taming Transformers for High-Resolution Image Synthesis ☆6,434 · Updated Jul 30, 2024
- Search photos on Unsplash with OpenAI's CLIP model; supports joint image+text queries and attention visualization ☆224 · Updated Sep 9, 2021
- Code for the ICML 2021 (long talk) paper "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,527 · Updated Apr 3, 2024
- LAVIS: A One-stop Library for Language-Vision Intelligence ☆11,167 · Updated Nov 18, 2024
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆807 · Updated Mar 20, 2024
- Official implementation of "SimMIM: A Simple Framework for Masked Image Modeling" ☆1,024 · Updated Sep 29, 2022