FreddeFrallan / Multilingual-CLIP
OpenAI CLIP text encoders for multiple languages!
☆795 · Updated last year
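Since the repository ships as the `multilingual-clip` pip package, a minimal usage sketch looks like the following. The model name and the API calls are taken from the project's README as I recall it; treat the exact names as assumptions.

```python
# pip install multilingual-clip torch transformers
# Minimal sketch based on the Multilingual-CLIP README; the model name
# 'M-CLIP/XLM-Roberta-Large-Vit-L-14' and the API below are assumptions
# drawn from the project's documentation.
from multilingual_clip import pt_multilingual_clip
import transformers

texts = [
    "Three blind horses listening to Mozart.",       # English
    "Tres caballos ciegos escuchando a Mozart.",     # Spanish
]
model_name = "M-CLIP/XLM-Roberta-Large-Vit-L-14"

# Load the multilingual text encoder and its tokenizer.
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

# Embed the texts into the shared CLIP image-text space.
embeddings = model.forward(texts, tokenizer)
print(embeddings.shape)  # e.g. torch.Size([2, 768])
```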
Alternatives and similar repositories for Multilingual-CLIP:
Users interested in Multilingual-CLIP are comparing it to the libraries listed below.
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch. (☆692, updated 3 years ago)
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch (☆1,238, updated 2 years ago)
- A concise but complete implementation of CLIP, with various experimental improvements from recent papers (☆707, updated last year)
- CLIP-like model evaluation (☆696, updated last month)
- Conceptual 12M, a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training (☆388, updated 2 years ago)
- Implementation of CoCa (Contrastive Captioners are Image-Text Foundation Models) in PyTorch (☆1,129, updated last year)
- Code release for SLIP: Self-supervision meets Language-Image Pre-training (☆766, updated 2 years ago)
- GIT: A Generative Image-to-text Transformer for Vision and Language (☆565, updated last year)
- Easily compute CLIP embeddings and build a CLIP retrieval system with them (☆2,546, updated last year)
- An official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" (☆936, updated last year)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm (☆651, updated 2 years ago)
- Robust fine-tuning of zero-shot models (☆696, updated 2 years ago)
- DataComp: In search of the next generation of multimodal datasets (☆700, updated last year)
- Code for ALBEF, a new vision-language pre-training method (☆1,641, updated 2 years ago)
- Simple image captioning model (☆1,362, updated 10 months ago)
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs" (☆482, updated last year)
- Language Models Can See: Plugging Visual Controls in Text Generation (☆256, updated 2 years ago)
- Search photos on Unsplash with OpenAI's CLIP model, supporting joint image+text queries and attention visualization (☆221, updated 3 years ago)
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) (☆370, updated last year)
- TorchMultimodal, a PyTorch library for training state-of-the-art multimodal multi-task models at scale (☆1,584, updated last week)
- WIT (Wikipedia-based Image Text), a large multimodal multilingual dataset comprising 37M+ image-text sets with 11M+ unique images… (☆1,049, updated 7 months ago)
- Official repository of OFA (ICML 2022). Paper: "OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework" (☆2,498, updated last year)
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) (☆242, updated 2 years ago)
- Simple implementation of the OpenAI CLIP model in PyTorch (☆680, updated last year)
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training (☆167, updated 2 years ago)
- Automatically create Faiss k-NN indices with optimal similarity search parameters (☆851, updated 11 months ago)
- Multi-modality pre-training (☆491, updated 11 months ago)
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) (☆478, updated 2 years ago)
- Get hundreds of millions of image+URL pairs from the crawling@home dataset and preprocess them (☆219, updated 11 months ago)
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… (☆718, updated last year)