lucidrains / CoCa-pytorch
Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch
★1,178 · Updated last year
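For reference, here is a minimal training-step sketch in the style of the repository's README, pairing a `vit-pytorch` backbone with the `CoCa` module. The parameter names are assumptions based on README-style usage and may differ between versions of the package.

```python
# Minimal CoCa-pytorch training-step sketch (README-style usage; parameter
# names are assumptions and may vary between package versions).
import torch
from vit_pytorch import ViT
from vit_pytorch.extractor import Extractor
from coca_pytorch.coca_pytorch import CoCa

# Image encoder: a plain ViT wrapped so it returns patch embeddings
# instead of classification logits.
vit = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048
)
vit = Extractor(vit, return_embeddings_only = True, detach = False)

coca = CoCa(
    dim = 512,                     # text / model dimension
    img_encoder = vit,             # plug in the vision backbone
    image_dim = 1024,              # dimension of the image embeddings
    num_tokens = 20000,            # text vocabulary size
    unimodal_depth = 6,            # text-only decoder layers
    multimodal_depth = 6,          # cross-attending decoder layers
    dim_head = 64,
    heads = 8,
    caption_loss_weight = 1.,      # weight on the captioning loss
    contrastive_loss_weight = 1.   # weight on the contrastive loss
)

# Mock batch: token ids and images.
text = torch.randint(0, 20000, (4, 512))
images = torch.randn(4, 3, 256, 256)

# Returns the combined captioning + contrastive loss when return_loss=True.
loss = coca(text = text, images = images, return_loss = True)
loss.backward()
```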
Alternatives and similar repositories for CoCa-pytorch
Users interested in CoCa-pytorch are comparing it to the libraries listed below.
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. (★713 · Updated 3 years ago)
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch (★1,266 · Updated 3 years ago)
- Robust fine-tuning of zero-shot models (★744 · Updated 3 years ago)
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… (★2,537 · Updated last year)
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). (★1,219 · Updated last year)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm (★667 · Updated 3 years ago)
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) (★930 · Updated last year)
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) (★748 · Updated 3 years ago)
- (★1,036 · Updated 3 years ago)
- Grounded Language-Image Pre-training (★2,519 · Updated last year)
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… (★870 · Updated 2 years ago)
- Code for ALBEF: a new vision-language pre-training method (★1,722 · Updated 3 years ago)
- EVA Series: Visual Representation Fantasies from BAAI (★2,585 · Updated last year)
- GIT: A Generative Image-to-text Transformer for Vision and Language (★575 · Updated last year)
- CLIP-like model evaluation (★780 · Updated 2 months ago)
- Simple image captioning model (★1,394 · Updated last year)
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. (★1,656 · Updated last week)
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" (★1,499 · Updated last year)
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. (★773 · Updated 3 years ago)
- A concise but complete implementation of CLIP with various experimental improvements from recent papers (★716 · Updated 2 years ago)
- Code release for SLIP: Self-supervision meets Language-Image Pre-training (★782 · Updated 2 years ago)
- A method to increase the speed and lower the memory footprint of existing vision transformers. (★1,110 · Updated last year)
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 (★1,298 · Updated 3 years ago)
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" (★792 · Updated last year)
- OpenAI CLIP text encoders for multiple languages! (★814 · Updated 2 years ago)
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" (★403 · Updated last year)
- Contrastive Language-Image Forensic Search allows free-text searching through videos using OpenAI's machine learning model CLIP (★477 · Updated 3 years ago)
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" (★993 · Updated last year)
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) (★2,092 · Updated last year)
- Official code for VisProg (CVPR 2023 Best Paper!) (★749 · Updated last year)
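Many of the repositories above build on or evaluate OpenAI's CLIP. As a point of reference, zero-shot classification with the official `clip` package looks roughly like the sketch below; the image path and candidate labels are illustrative placeholders.

```python
# Zero-shot classification with OpenAI's CLIP (openai/CLIP package).
# "photo.jpg" and the candidate labels are illustrative placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    # Similarity logits between the image and each text prompt.
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probabilities:", probs)
```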