OFA-Sys / OFA
Official repository of OFA (ICML 2022). Paper: "OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework"
☆2,506 · Updated last year
Alternatives and similar repositories for OFA
Users interested in OFA are comparing it to the libraries listed below.
- Code for ALBEF: a new vision-language pre-training method ☆1,675 · Updated 2 years ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,376 · Updated 11 months ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,156 · Updated last year
- Grounded Language-Image Pre-training ☆2,461 · Updated last year
- Simple image captioning model ☆1,383 · Updated last year
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering network, in PyTorch ☆1,249 · Updated 2 years ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,475 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,535 · Updated 11 months ago
- An open-source framework for training large multimodal models. ☆3,976 · Updated 10 months ago
- Easily compute CLIP embeddings and build a CLIP retrieval system with them ☆2,595 · Updated last year
- Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine (a minimal download sketch follows after this list). ☆4,084 · Updated 11 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence (a captioning sketch follows after this list) ☆10,737 · Updated 8 months ago
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆702 · Updated 3 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale ☆1,625 · Updated last week
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆572 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ☆1,208 · Updated last year
- Multimodal-GPT ☆1,506 · Updated 2 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,010 · Updated last year
- An official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆968 · Updated last year
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,493 · Updated 3 months ago
- OpenAI CLIP text encoders for multiple languages! ☆805 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆722 · Updated 3 years ago
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,576 · Updated 7 months ago
- CLIP-like model evaluation ☆738 · Updated last month
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,317 · Updated last year
- An open-source implementation of CLIP (an embedding sketch follows after this list). ☆12,176 · Updated last month
- Emu Series: Generative Multimodal Models from BAAI ☆1,734 · Updated 9 months ago
- A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… ☆1,043 · Updated 9 months ago
- [Image 2 Text Para] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆813 · Updated 2 years ago
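To make the img2dataset entry above concrete, here is a minimal sketch of its Python API, assuming a plain text file with one image URL per line. The file name `urls.txt`, the output folder, and the tuning values are illustrative, not settings taken from the OFA repo; check img2dataset's README for the current options.

```python
# A minimal img2dataset sketch: turn a text file of image URLs into
# resized webdataset shards. File names and parameter values below are
# illustrative assumptions.
from img2dataset import download

download(
    url_list="urls.txt",          # assumed input: one image URL per line
    input_format="txt",
    output_folder="images",       # shards are written here
    output_format="webdataset",   # tar shards; "files" writes plain images
    image_size=256,               # target size for resizing
    processes_count=8,            # parallel download processes
    thread_count=32,              # download threads per process
)
```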
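The LAVIS entry advertises a one-stop interface for language-vision tasks; as one example, this is a minimal captioning sketch using its `load_model_and_preprocess` helper. The model name, model type, and image path are assumed typical values, not anything prescribed by OFA.

```python
# A minimal LAVIS captioning sketch with a BLIP model. The image path
# "example.jpg" is a hypothetical placeholder.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained captioner plus its matching image preprocessors.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)

raw_image = Image.open("example.jpg").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
print(model.generate({"image": image}))  # e.g. ['a cat sitting on a window sill']
```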
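For the open_clip entry, a minimal zero-shot similarity sketch follows: embed one image and two captions, then score them with cosine similarity. The model name, pretrained tag, image path, and prompts are illustrative assumptions; the available weights are listed in that repo.

```python
# A minimal open_clip sketch. "cat.jpg" is a hypothetical image path.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product becomes cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # higher probability on the matching caption
```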