OFA-Sys/OFA
Official repository of OFA (ICML 2022). Paper: "OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework"
☆2,421 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for OFA
- Code for ALBEF: a new vision-language pre-training method ☆1,565 · Updated 2 years ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆4,823 · Updated 3 months ago
- Grounded Language-Image Pre-training ☆2,226 · Updated 9 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,136 · Updated 4 months ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,407 · Updated 7 months ago
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch ☆1,215 · Updated 2 years ago
- An open-source framework for training large multimodal models. ☆3,750 · Updated 2 months ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,067 · Updated 11 months ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,307 · Updated 3 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,777 · Updated 6 months ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,474 · Updated this week
- LAVIS - A One-stop Library for Language-Vision Intelligence; see the captioning sketch after this list ☆9,943 · Updated this week
- Simple image captioning model ☆1,317 · Updated 5 months ago
- An open source implementation of CLIP; see the zero-shot scoring sketch after this list ☆10,344 · Updated last week
- Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine; see the download sketch after this list. ☆3,723 · Updated 3 months ago
- A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… ☆973 · Updated last month
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆549 · Updated 11 months ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆881 · Updated 7 months ago
- Easily compute CLIP embeddings and build a CLIP retrieval system with them; see the client sketch after this list ☆2,413 · Updated 7 months ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆693 · Updated last year
- Robust fine-tuning of zero-shot models ☆649 · Updated 2 years ago
- Scenic: A Jax Library for Computer Vision Research and Beyond ☆3,334 · Updated last month
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ☆1,374 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆449 · Updated last year
- CLIP-like model evaluation ☆615 · Updated 3 months ago
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch. ☆665 · Updated 2 years ago
- [ICCV 2021, Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆801 · Updated last year
- Recent Advances in Vision and Language Pre-Trained Models (VL-PTMs) ☆1,140 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆636 · Updated 2 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more ☆2,339 · Updated 2 months ago
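To give a taste of the listed libraries, here are a few minimal usage sketches. First, LAVIS: a captioning sketch following the `load_model_and_preprocess` entry point from its documentation. The model and checkpoint names come from LAVIS's model zoo; the image path is a placeholder.

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess  # pip install salesforce-lavis

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a BLIP captioning model together with its matching image preprocessors.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)

raw_image = Image.open("example.jpg").convert("RGB")  # placeholder path
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# Generate a caption for the image (returns a list of strings).
print(model.generate({"image": image}))
```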
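For OpenCLIP, a zero-shot scoring sketch following the pattern in that project's README: load pretrained weights, encode an image and a few candidate captions, and compare them by cosine similarity. The checkpoint tag is one of the LAION-trained weights; any pair returned by `open_clip.list_pretrained()` works.

```python
import torch
from PIL import Image
import open_clip  # pip install open_clip_torch

# Load a pretrained CLIP model with its image transform and tokenizer.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder path
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product is a cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability of each caption matching the image
```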
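For img2dataset, a download sketch using its Python entry point (it also ships a CLI of the same name). The input file name and sizes here are illustrative; the expected input is one image URL per line.

```python
from img2dataset import download  # pip install img2dataset

# Download and resize every image listed in urls.txt,
# writing the results as webdataset shards.
download(
    url_list="urls.txt",        # placeholder input file, one URL per line
    input_format="txt",
    output_folder="images",
    output_format="webdataset",
    image_size=256,
    processes_count=8,
    thread_count=32,
)
```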
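And for clip-retrieval, a client sketch that queries a running knn service over precomputed CLIP embeddings. The service URL and index name below point at LAION's public demo backend and may change or go offline; you can also point the client at your own `clip-retrieval back` instance.

```python
from clip_retrieval.clip_client import ClipClient  # pip install clip-retrieval

# Query a hosted knn index of CLIP embeddings by text.
client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # public demo endpoint; may change
    indice_name="laion5B-L-14",
    num_images=5,
)
for result in client.query(text="an orange tabby cat"):
    print(result["url"], result["similarity"])
```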