lucidrains / flamingo-pytorch
Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch
⭐1,235 · Updated 2 years ago
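As a quick orientation, the repository's central building block is a Perceiver-style resampler that compresses a variable-length sequence of visual features into a fixed number of latents before the language model cross-attends to them. Below is a minimal usage sketch, assuming the `PerceiverResampler` export and constructor arguments match the repository's documented API; the shapes and hyperparameters are illustrative:

```python
import torch
from flamingo_pytorch import PerceiverResampler  # assumes the package's documented export

# Resample visual features into a fixed set of latents,
# as in Flamingo's perceiver resampler.
resampler = PerceiverResampler(
    dim = 1024,          # feature dimension of the incoming visual tokens
    depth = 2,           # number of attention layers in the resampler
    dim_head = 64,
    heads = 8,
    num_latents = 64,    # media sequence is compressed to this many latents
    num_time_embeds = 4  # maximum number of images per example
)

medias = torch.randn(1, 2, 256, 1024)  # (batch, time, sequence length, dim)
perceived = resampler(medias)          # -> (1, 2, 64, 1024): (batch, time, num latents, dim)
```

The fixed latent count is what keeps the cross-attention cost in the language model constant regardless of how many visual tokens each image produces.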
Alternatives and similar repositories for flamingo-pytorch:
Users interested in flamingo-pytorch are comparing it to the libraries listed below.
- A concise but complete implementation of CLIP with various experimental improvements from recent papers · ⭐708 · Updated last year
- CLIP-like model evaluation · ⭐680 · Updated last month
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. · ⭐1,561 · Updated last week
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch · ⭐1,118 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". · ⭐479 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm · ⭐649 · Updated 2 years ago
- DataComp: In search of the next generation of multimodal datasets · ⭐688 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ⭐1,371 · Updated last week
- Robust fine-tuning of zero-shot models · ⭐681 · Updated 2 years ago
- A PyTorch Lightning solution to train OpenAI's CLIP from scratch. · ⭐683 · Updated 2 years ago
- Official code for VisProg (CVPR 2023 Best Paper!) · ⭐712 · Updated 6 months ago
- GIT: A Generative Image-to-text Transformer for Vision and Language · ⭐559 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). · ⭐1,183 · Updated 8 months ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training · ⭐762 · Updated 2 years ago
- Multimodal-GPT · ⭐1,495 · Updated last year
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. · ⭐921 · Updated this week
- An open-source framework for training large multimodal models. · ⭐3,857 · Updated 6 months ago
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) · ⭐902 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… · ⭐2,487 · Updated 11 months ago
- Code for ALBEF: a new vision-language pre-training method · ⭐1,622 · Updated 2 years ago
- ⭐995 · Updated 2 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. · ⭐1,025 · Updated 9 months ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) · ⭐717 · Updated 2 years ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" · ⭐518 · Updated last year
- OpenAI CLIP text encoders for multiple languages! · ⭐786 · Updated last year
- A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… · ⭐1,021 · Updated 5 months ago
- ⭐772 · Updated 8 months ago
- Recent Advances in Vision and Language Pre-training (VLP) · ⭐293 · Updated last year
- Multi-modality pre-training · ⭐487 · Updated 10 months ago
- Code release for "Learning Video Representations from Large Language Models" · ⭐512 · Updated last year