Zasder3 / train-CLIP
A PyTorch Lightning solution to training OpenAI's CLIP from scratch.
☆713 · Updated 3 years ago
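For context, training CLIP from scratch boils down to a symmetric contrastive objective over in-batch image-text pairs. Below is a minimal, illustrative sketch of that objective wrapped in a PyTorch Lightning module; it is not taken from the train-CLIP codebase, and the class name, encoder modules, and hyperparameters are placeholders.

```python
# Minimal sketch of CLIP-style contrastive training in PyTorch Lightning.
# Not the train-CLIP repository's actual code; encoders and hyperparameters
# are illustrative placeholders.
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class CLIPLitSketch(pl.LightningModule):
    def __init__(self, image_encoder, text_encoder, temperature=0.07):
        super().__init__()
        self.image_encoder = image_encoder   # e.g. a ViT returning (B, D) embeddings
        self.text_encoder = text_encoder     # e.g. a Transformer returning (B, D) embeddings
        # Learnable temperature, stored in log space as in the CLIP paper.
        self.logit_scale = torch.nn.Parameter(torch.tensor(1 / temperature).log())

    def training_step(self, batch, batch_idx):
        images, texts = batch
        img_emb = F.normalize(self.image_encoder(images), dim=-1)
        txt_emb = F.normalize(self.text_encoder(texts), dim=-1)

        # Cosine-similarity logits for every image-text pair in the batch.
        logits = self.logit_scale.exp() * img_emb @ txt_emb.t()
        targets = torch.arange(logits.size(0), device=logits.device)

        # Symmetric cross-entropy: match images to texts and texts to images.
        loss = (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=5e-4)
```

The learnable temperature and the symmetric cross-entropy over in-batch pairs follow the original CLIP recipe; train-CLIP itself adds the Lightning training loop, data handling, and configuration around this core idea.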
Alternatives and similar repositories for train-CLIP
Users interested in train-CLIP are comparing it to the libraries listed below.
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,181 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆666 · Updated 3 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆716 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆744 · Updated 3 years ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆782 · Updated 2 years ago
- [ICCV 2021 - Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆869 · Updated 2 years ago
- Simple image captioning model ☆1,394 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,218 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆403 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆748 · Updated 3 years ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆575 · Updated last year
- CLIP-like model evaluation ☆779 · Updated 2 months ago
- ☆1,033 · Updated 3 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ☆1,265 · Updated 3 years ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆772 · Updated 3 years ago
- OpenAI CLIP text encoders for multiple languages! ☆813 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆404 · Updated 3 months ago
- Code for ALBEF: a new vision-language pre-training method ☆1,718 · Updated 3 years ago
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆930 · Updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,499 · Updated last year
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆460 · Updated 3 years ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆279 · Updated 3 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,654 · Updated last week
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆428 · Updated 2 years ago
- Code to train CLIP model ☆122 · Updated 3 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆485 · Updated 2 years ago
- ☆639 · Updated last year
- ☆550 · Updated 3 years ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆991 · Updated last year
- Contrastive Language-Image Forensic Search allows free text searching through videos using OpenAI's machine learning model CLIP ☆476 · Updated 3 years ago