FreddeFrallan / Multilingual-CLIP
OpenAI CLIP text encoders for multiple languages!
☆817 · Updated 2 years ago
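Since every repository below orbits CLIP's text-image embedding space, a minimal usage sketch of Multilingual-CLIP itself may help. This follows the PyTorch example in the repository's README (pip package `multilingual-clip`); the checkpoint name and module layout are taken from that README and may differ between releases:

```python
# Minimal sketch: embed non-English text with Multilingual-CLIP.
# Assumes: pip install multilingual-clip torch transformers
# Model/module names follow the repo README; verify against the release you install.
from multilingual_clip import pt_multilingual_clip
import transformers

texts = [
    "Three blind horses listening to Mozart.",  # English
    "Två hundar som leker i snön.",             # Swedish: "Two dogs playing in the snow."
]
model_name = "M-CLIP/XLM-Roberta-Large-Vit-L-14"

# Load the multilingual text encoder and its matching tokenizer.
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

# Text embeddings aligned with the paired OpenAI CLIP image encoder.
embeddings = model.forward(texts, tokenizer)
print("Text features shape:", embeddings.shape)  # (2, embedding_dim)
```

The returned text embeddings live in the same space as the paired OpenAI CLIP image encoder (ViT-L/14 for this checkpoint), so they can be scored against image embeddings with cosine similarity, exactly as with the English-only text encoder.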
Alternatives and similar repositories for Multilingual-CLIP
Users interested in Multilingual-CLIP are comparing it to the repositories listed below.
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch. ☆715 · Updated 3 years ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆575 · Updated last year
- WIT (Wikipedia-based Image Text) Dataset is a large multimodal multilingual dataset comprising 37M+ image-text sets with 11M+ unique imag… ☆1,085 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,266 · Updated 3 years ago
- CLIP-like model evaluation ☆785 · Updated this week
- Robust fine-tuning of zero-shot models ☆748 · Updated 3 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆718 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,180 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆405 · Updated 3 months ago
- Simple image captioning model ☆1,396 · Updated last year
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆782 · Updated 2 years ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆483 · Updated 2 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆246 · Updated 5 months ago
- Automatically create Faiss KNN indices with optimal similarity search parameters. ☆875 · Updated last week
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆669 · Updated 3 years ago
- Contrastive Language-Image Forensic Search allows free-text searching through videos using OpenAI's CLIP model ☆478 · Updated 3 years ago
- DataComp: In search of the next generation of multimodal datasets ☆745 · Updated 6 months ago
- [ICCV 2021, Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆873 · Updated 2 years ago
- Easily compute CLIP embeddings and build a CLIP retrieval system with them ☆2,680 · Updated 2 months ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆1,000 · Updated last year
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆723 · Updated 2 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆484 · Updated 2 years ago
- Simple implementation of the OpenAI CLIP model in PyTorch. ☆714 · Updated 3 weeks ago
- Oscar and VinVL ☆1,050 · Updated 2 years ago
- Large-scale text-video dataset. 10 million captioned short videos. ☆662 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,539 · Updated last year
- Research code for pixel-based encoders of language (PIXEL) ☆344 · Updated 3 months ago
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image … ☆553 · Updated 4 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,664 · Updated last week
- Language Models Can See: Plugging Visual Controls in Text Generation ☆259 · Updated 3 years ago