rmokady / CLIP_prefix_caption
Simple image captioning model
☆1,393 · Updated last year
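For context, CLIP_prefix_caption (ClipCap) captions images by mapping a CLIP image embedding to a short sequence of prefix embeddings that condition a GPT-2 decoder. Below is a minimal sketch of that idea, not the repository's actual code; the mapper architecture, dimensions, and class name are illustrative assumptions.

```python
# Illustrative sketch of CLIP-prefix captioning, assuming the Hugging Face
# transformers package. Not the repository's actual implementation.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

class ClipCaptionSketch(nn.Module):
    def __init__(self, prefix_length: int = 10, clip_dim: int = 512):
        super().__init__()
        self.gpt = GPT2LMHeadModel.from_pretrained("gpt2")
        gpt_dim = self.gpt.config.n_embd  # 768 for base GPT-2
        self.prefix_length = prefix_length
        # Hypothetical MLP mapper: one CLIP embedding -> prefix_length
        # pseudo-token embeddings in GPT-2's input space.
        self.mapper = nn.Sequential(
            nn.Linear(clip_dim, gpt_dim * prefix_length // 2),
            nn.Tanh(),
            nn.Linear(gpt_dim * prefix_length // 2, gpt_dim * prefix_length),
        )

    def forward(self, clip_embed: torch.Tensor, token_ids: torch.Tensor):
        # clip_embed: (batch, clip_dim); token_ids: (batch, seq_len)
        prefix = self.mapper(clip_embed).view(
            -1, self.prefix_length, self.gpt.config.n_embd)
        tokens = self.gpt.transformer.wte(token_ids)
        # GPT-2 attends to the prefix while decoding the caption tokens.
        return self.gpt(inputs_embeds=torch.cat([prefix, tokens], dim=1))
```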
Alternatives and similar repositories for CLIP_prefix_caption
Users interested in CLIP_prefix_caption often compare it to the libraries listed below:
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch. ☆712 · Updated 3 years ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆574 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework ☆2,536 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,216 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,178 · Updated last year
- Simple implementation of the OpenAI CLIP model in PyTorch. ☆705 · Updated last year
- OpenAI CLIP text encoders for multiple languages! ☆812 · Updated 2 years ago
- Implementation of 🦩 Flamingo, the state-of-the-art few-shot visual question answering attention net from DeepMind, in PyTorch ☆1,265 · Updated 2 years ago
- Easily compute CLIP embeddings and build a CLIP retrieval system with them (see the embedding sketch after this list) ☆2,653 · Updated last month
- Grounded Language-Image Pre-training ☆2,507 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆716 · Updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,498 · Updated last year
- Code for ALBEF: a new vision-language pre-training method ☆1,713 · Updated 3 years ago
- Robust fine-tuning of zero-shot models ☆742 · Updated 3 years ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Updated 3 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆666 · Updated 3 years ago
- ☆1,029 · Updated 3 years ago
- [ICCV 2021 - Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers ☆871 · Updated 2 years ago
- Contrastive Language-Image Forensic Search allows free-text search through videos using OpenAI's CLIP model ☆476 · Updated 3 years ago
- CLIP-like model evaluation ☆773 · Updated last month
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆782 · Updated 2 years ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,514 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆987 · Updated last year
- This repository contains the code of the CVPR 2022 paper "Image Segmentation Using Text and Image Prompts". ☆1,283 · Updated last year
- Meshed-Memory Transformer for Image Captioning. CVPR 2020 ☆543 · Updated 2 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,079 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆793 · Updated last year
- Oscar and VinVL ☆1,049 · Updated 2 years ago
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) ☆847 · Updated 4 years ago
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆415 · Updated 2 years ago
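The common building block behind most of these projects, and the one the clip-retrieval entry above packages up, is computing CLIP image and text embeddings and comparing them by cosine similarity. Here is a minimal sketch assuming the open_clip package; the model name, image path, and query strings are illustrative placeholders, not code from any repository listed here.

```python
# Minimal sketch: compute CLIP embeddings and rank text queries against an
# image, assuming the open_clip package. "photo.jpg" and the queries are
# placeholders.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # (1, 3, 224, 224)
text = tokenizer(["a dog", "a cat", "a city street"])

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(text)
    # L2-normalize so the dot product equals cosine similarity.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    print((img_emb @ txt_emb.T).squeeze(0))  # one similarity per query
```

A retrieval system such as clip-retrieval builds on the same embeddings, indexing large collections of them with an approximate nearest-neighbor index so queries stay fast at scale.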