FreddeFrallan / Multilingual-CLIP
OpenAI CLIP text encoders for multiple languages!
☆795 · Updated last year
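A minimal sketch of encoding multilingual text with the `multilingual_clip` package from this repo; the `MultilingualCLIP.from_pretrained` call, the `forward(texts, tokenizer)` signature, and the `M-CLIP/XLM-Roberta-Large-Vit-B-16Plus` checkpoint name are assumptions taken from the project's own documentation, not guaranteed by this listing.

```python
# Hedged sketch: assumes the multilingual_clip PyPI package and the
# M-CLIP/XLM-Roberta-Large-Vit-B-16Plus checkpoint name from the repo docs.
import transformers
from multilingual_clip import pt_multilingual_clip

texts = [
    "Three blind horses listening to Mozart.",   # English
    "Tre blinda hästar som lyssnar på Mozart.",  # Swedish
]
model_name = "M-CLIP/XLM-Roberta-Large-Vit-B-16Plus"  # assumed checkpoint name

# Load the multilingual text encoder and its tokenizer.
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

# Text embeddings aligned with the corresponding CLIP image encoder.
embeddings = model.forward(texts, tokenizer)
print(embeddings.shape)
```

The resulting embeddings are meant to live in the same space as the matching CLIP image encoder, so they can be compared against image features for cross-lingual retrieval.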
Alternatives and similar repositories for Multilingual-CLIP:
Users interested in Multilingual-CLIP are comparing it to the libraries listed below.
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆690 · Updated 3 years ago
- CLIP-like model evaluation ☆696 · Updated 3 weeks ago
- Automatically create Faiss knn indices with the most optimal similarity search parameters. ☆850 · Updated 11 months ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ☆1,238 · Updated 2 years ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆766 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆707 · Updated last year
- Robust fine-tuning of zero-shot models ☆696 · Updated 2 years ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆565 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆651 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,131 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆388 · Updated 2 years ago
- Language Models Can See: Plugging Visual Controls in Text Generation ☆256 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,191 · Updated 9 months ago
- Easily compute clip embeddings and build a clip retrieval system with them ☆2,544 · Updated last year
- [ICCV 2021 - Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆846 · Updated last year
- Simple image captioning model ☆1,362 · Updated 10 months ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- Simple implementation of OpenAI CLIP model in PyTorch. ☆680 · Updated last year
- Search photos on Unsplash based on OpenAI's CLIP model, support search with joint image+text queries and attention visualization. ☆221 · Updated 3 years ago
- Get hundreds of millions of image+URL pairs from the crawling at home dataset and preprocess them ☆218 · Updated 10 months ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆935 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,498 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆700 · Updated last year
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆166 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆396 · Updated last year
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint ☆387 · Updated last year
- ☆997 · Updated 2 years ago
- Large-scale text-video dataset. 10 million captioned short videos. ☆629 · Updated 8 months ago
- Code for ALBEF: a new vision-language pre-training method ☆1,638 · Updated 2 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆242 · Updated 2 years ago