moein-shariatnia / OpenAI-CLIP
Simple implementation of the OpenAI CLIP model in PyTorch.
☆716 · Updated 2 months ago
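For orientation: CLIP-style repositories like this one train a symmetric contrastive (InfoNCE) objective over paired image/text embeddings. Below is a minimal sketch of that loss in PyTorch; the function name, default temperature, and tensor shapes are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_embeds: torch.Tensor,
                          text_embeds: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired (image, text) embeddings.

    Both inputs are (batch, dim); row i of each tensor is a positive pair.
    """
    # L2-normalize so the dot product below is cosine similarity.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # (batch, batch) similarity matrix, sharpened by the temperature.
    logits = image_embeds @ text_embeds.t() / temperature

    # The i-th image matches the i-th caption: targets are the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image -> text and text -> image).
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```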
Alternatives and similar repositories for OpenAI-CLIP
Users interested in OpenAI-CLIP are comparing it to the libraries listed below.
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch. ☆717 · Updated 3 years ago
- Implementation of CoCa (Contrastive Captioners are Image-Text Foundation Models) in PyTorch ☆1,188 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,229 · Updated last year
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch ☆1,273 · Updated 3 years ago
- Robust fine-tuning of zero-shot models ☆756 · Updated 3 years ago
- [ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers ☆884 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆718 · Updated 2 years ago
- Simple image captioning model ☆1,404 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,677 · Updated this week
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆578 · Updated 2 years ago
- CLIP-like model evaluation ☆792 · Updated 2 weeks ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆784 · Updated 2 years ago
- Pix2Seq codebase: multi-task learning with generative modeling (autoregressive and diffusion) ☆934 · Updated 2 years ago
- OpenAI CLIP text encoders for multiple languages! ☆823 · Updated 2 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,134 · Updated last year
- [CVPR 2022] Official PyTorch implementation of DiffusionCLIP: Text-guided Image Manipulation Using Diffusion Models ☆865 · Updated 2 years ago
- Conceptual 12M is a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training. ☆410 · Updated 5 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆669 · Updated 3 years ago
- Explainability for Vision Transformers ☆1,021 · Updated 3 years ago
- Implementation of Parti, Google's pure attention-based text-to-image neural network, in PyTorch ☆537 · Updated 2 years ago
- Grounded Language-Image Pre-training ☆2,560 · Updated last year
- Easily compute CLIP embeddings and build a CLIP retrieval system with them; a brute-force sketch of the idea follows this list ☆2,708 · Updated 4 months ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,350 · Updated last year
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆822 · Updated 3 years ago
- Probing the representations of Vision Transformers. ☆336 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆403 · Updated 2 years ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,780 · Updated 3 weeks ago
- Official implementation of VQ-Diffusion ☆970 · Updated last year
- Code for the CVPR 2022 paper "Image Segmentation Using Text and Image Prompts". ☆1,302 · Updated last year
- Implementation of Muse: Text-to-Image Generation via Masked Generative Transformers, in PyTorch ☆919 · Updated last year
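The embedding/retrieval item above is, at its core, nearest-neighbor search over precomputed CLIP embeddings. Here is a brute-force cosine-similarity sketch of that idea; production systems typically use approximate-nearest-neighbor indices such as faiss, and every name below is an illustrative assumption rather than any project's actual API.

```python
import torch
import torch.nn.functional as F

def build_index(embeddings: torch.Tensor) -> torch.Tensor:
    """Normalize the corpus once so each query is a single matrix multiply."""
    return F.normalize(embeddings, dim=-1)

def retrieve(queries: torch.Tensor, index: torch.Tensor, k: int = 5):
    """Return indices and cosine scores of the top-k items for each query."""
    queries = F.normalize(queries, dim=-1)
    scores = queries @ index.t()          # (n_queries, n_items)
    top = scores.topk(k, dim=-1)
    return top.indices, top.values

# Example with random stand-ins for precomputed CLIP embeddings.
index = build_index(torch.randn(10_000, 512))
ids, scores = retrieve(torch.randn(2, 512), index, k=5)
```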