OFA-Sys / OFA
Official repository of OFA (ICML 2022). Paper: "OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework".
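To show what the "simple sequence-to-sequence" framing means in practice, here is a minimal image-captioning sketch adapted from the Hugging Face usage path described in the OFA repo. It assumes the OFA-Sys fork of `transformers` (which provides `OFATokenizer` and `OFAModel`) plus a locally downloaded checkpoint; the checkpoint path, image path, and input resolution below are placeholders, so check the repo for the exact values for your checkpoint.

```python
# Sketch only: requires the OFA-Sys fork of transformers (see the repo's
# Transformers instructions), not mainline Hugging Face transformers.
from PIL import Image
from torchvision import transforms
from transformers import OFATokenizer, OFAModel

ckpt_dir = "./OFA-tiny"  # placeholder: local OFA checkpoint directory
resolution = 256         # placeholder: use the resolution your checkpoint expects

# OFA normalizes image patches to [-1, 1].
patch_resize = transforms.Compose([
    transforms.Resize((resolution, resolution),
                      interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
patch_img = patch_resize(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)

# The task is specified as plain text: captioning, VQA, grounding, etc.
# all go through the same tokenizer + generate() interface.
prompt = " what does the image describe?"
inputs = tokenizer([prompt], return_tensors="pt").input_ids
gen = model.generate(inputs, patch_images=patch_img,
                     num_beams=5, no_repeat_ngram_size=3)
print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```

The point of the unified design is visible here: switching tasks means switching the text prompt, not swapping in a task-specific model head.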
★2,550 · Updated last year
Alternatives and similar repositories for OFA
Users interested in OFA are comparing it to the libraries listed below.
- Code for ALBEF: a new vision-language pre-training method ★1,746 · Updated 3 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ★1,275 · Updated 3 years ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ★5,628 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ★1,191 · Updated 2 years ago
- Grounded Language-Image Pre-training ★2,560 · Updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ★1,516 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ★2,635 · Updated last year
- An open-source framework for training large multimodal models. ★4,056 · Updated last year
- Simple image captioning model ★1,407 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training); a minimal CLIP usage sketch follows this list ★1,231 · Updated last year
- Easily compute CLIP embeddings and build a CLIP retrieval system with them ★2,714 · Updated 4 months ago
- Multimodal-GPT ★1,516 · Updated 2 years ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ★2,541 · Updated 9 months ago
- A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… ★1,064 · Updated last year
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ★720 · Updated 3 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ★11,110 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ★578 · Updated 2 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ★2,152 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ★1,762 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ★1,686 · Updated this week
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ★1,017 · Updated last year
- Easily turn large sets of image URLs into an image dataset. Can download, resize and package 100M URLs in 20h on one machine. ★4,337 · Updated 2 months ago
- Robust fine-tuning of zero-shot models ★757 · Updated 3 years ago
- OpenAI CLIP text encoders for multiple languages! ★824 · Updated 2 years ago
- ★800 · Updated last year
- Contrastive Language-Image Forensic Search allows free text searching through videos using OpenAI's machine learning model CLIP ★480 · Updated 3 years ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ★1,157 · Updated 3 years ago
- ★1,043 · Updated 3 years ago
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ★1,464 · Updated 2 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ★487 · Updated 3 years ago
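Several of the entries above (the CLIP awesome list, clip-retrieval, the PyTorch Lightning CLIP trainer, the multilingual CLIP encoders) build on OpenAI's `clip` package. As a reference point, here is a minimal zero-shot classification sketch using that package; it assumes `pip install git+https://github.com/openai/CLIP.git` and a local image `cat.jpg`:

```python
import torch
import clip  # OpenAI's CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Embed one image and a few candidate captions into the shared space.
image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    # Softmax over captions: the highest probability is the best text match.
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)
```

The retrieval-oriented projects in the list (clip-retrieval and the forensic video search tool) scale this same image-text similarity computation up to millions of precomputed embeddings.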