lucidrains / flamingo-pytorch
Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
⭐1,248 · Updated 2 years ago
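For orientation, here is a minimal sketch of how the flamingo-pytorch package is typically used, based on its README: the `PerceiverResampler` compresses per-image patch features into a fixed number of latent tokens that a language model can attend to. The parameter values below are illustrative, not prescribed.

```python
import torch
from flamingo_pytorch import PerceiverResampler

# Perceiver-style resampler: compresses a variable-length sequence of image
# features into a fixed number of latent tokens per media item.
perceive = PerceiverResampler(
    dim = 1024,           # feature dimension of the incoming image tokens
    depth = 2,            # number of resampler layers
    dim_head = 64,
    heads = 8,
    num_latents = 64,     # each image is compressed to this many latents
    num_media_embeds = 4  # maximum number of images expected per sample
)

# (batch, number of images, patch tokens per image, dim) -- random features as a stand-in
medias = torch.randn(1, 2, 256, 1024)

perceived = perceive(medias)  # -> (1, 2, 64, 1024)
```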
Alternatives and similar repositories for flamingo-pytorch
Users interested in flamingo-pytorch are comparing it to the libraries listed below.
- A concise but complete implementation of CLIP with various experimental improvements from recent papers · ⭐713 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs" · ⭐482 · Updated last year
- DataComp: In search of the next generation of multimodal datasets · ⭐719 · Updated last month
- CLIP-like model evaluation · ⭐726 · Updated last week
- Robust fine-tuning of zero-shot models · ⭐717 · Updated 3 years ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch · ⭐696 · Updated 3 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch · ⭐1,150 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language · ⭐568 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale · ⭐1,616 · Updated this week
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) · ⭐1,204 · Updated 11 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm · ⭐657 · Updated 2 years ago
- Code release for SLIP Self-supervision meets Language-Image Pre-training · ⭐769 · Updated 2 years ago
- An open-source framework for training large multimodal models · ⭐3,960 · Updated 9 months ago
- Easily compute clip embeddings and build a clip retrieval system with them · ⭐2,574 · Updated last year
- Code for ALBEF: a new vision-language pre-training method · ⭐1,667 · Updated 2 years ago
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language · ⭐1,318 · Updated last year
- A method to increase the speed and lower the memory footprint of existing vision transformers · ⭐1,064 · Updated last year
- Official code for VisProg (CVPR 2023 Best Paper!) · ⭐730 · Updated 9 months ago
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text · ⭐932 · Updated 3 months ago
- Grounded Language-Image Pre-training · ⭐2,433 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… · ⭐2,504 · Updated last year
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) · ⭐914 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training · ⭐394 · Updated 2 years ago
- Code release for "Learning Video Representations from Large Language Models" · ⭐524 · Updated last year
- OpenAI CLIP text encoders for multiple languages! · ⭐802 · Updated 2 years ago
- ICLR2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ⭐1,460 · Updated 3 months ago
- Implementation of Muse: Text-to-Image Generation via Masked Generative Transformers, in Pytorch · ⭐896 · Updated last year
- Easily turn large sets of image urls to an image dataset. Can download, resize and package 100M urls in 20h on one machine · ⭐4,070 · Updated 10 months ago
- Simple implementation of OpenAI CLIP model in PyTorch · ⭐686 · Updated last year
- Multimodal-GPT · ⭐1,502 · Updated 2 years ago