OFA-Sys / OFA
Official repository of OFA (ICML 2022). Paper: "OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework".
☆2,502 · Updated last year
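OFA's core idea is that one encoder-decoder covers captioning, VQA, visual grounding, and more, because every task is rewritten as plain text-in/text-out with a handcrafted instruction, and even region outputs are quantized into discrete location tokens. The sketch below illustrates that framing only; the `Sample`/`make_sample` names are illustrative, and the instruction templates paraphrase the paper rather than the repository's actual data pipeline.

```python
# Minimal sketch of OFA-style task unification: every task becomes
# (instruction text, optional image) -> target text. All names here
# (Sample, INSTRUCTIONS, make_sample) are illustrative, not OFA's real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sample:
    instruction: str           # the task is specified entirely in the source text
    image_path: Optional[str]  # one encoder sees image patches alongside text tokens
    target: str                # every task's output is a plain token sequence

# Different tasks differ only in their instruction templates.
INSTRUCTIONS = {
    "caption":   "what does the image describe?",
    "vqa":       "{question}",
    "grounding": 'which region does the text "{phrase}" describe?',
}

def make_sample(task: str, image_path: Optional[str], target: str, **slots) -> Sample:
    """Render any task into the single seq2seq format the model is trained on."""
    return Sample(INSTRUCTIONS[task].format(**slots), image_path, target)

if __name__ == "__main__":
    batch = [
        make_sample("caption", "dog.jpg", "a dog catching a frisbee"),
        make_sample("vqa", "dog.jpg", "a frisbee",
                    question="what is the dog catching?"),
        # Region coordinates are quantized into special location tokens,
        # so grounding is still text generation (token names are illustrative).
        make_sample("grounding", "dog.jpg", "<bin_12> <bin_40> <bin_90> <bin_77>",
                    phrase="the frisbee"),
    ]
    for s in batch:
        print(f"src: {s.instruction!r:<55} tgt: {s.target!r}")
```

Because every target is an ordinary token sequence, a single encoder-decoder trained with one cross-entropy loss can, in principle, serve all of these tasks; that is what distinguishes OFA from the task-specific heads used by many of the libraries listed below.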
Alternatives and similar repositories for OFA
Users interested in OFA are comparing it to the libraries listed below.
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,321 · Updated 10 months ago
- Grounded Language-Image Pre-training ☆2,425 · Updated last year
- Code for ALBEF: a new vision-language pre-training method ☆1,665 · Updated 2 years ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,506 · Updated 10 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,636 · Updated 7 months ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,471 · Updated last year
- Simple image captioning model ☆1,376 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,147 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,246 · Updated 2 years ago
- Multimodal-GPT ☆1,501 · Updated 2 years ago
- Easily compute CLIP embeddings and build a CLIP retrieval system with them ☆2,574 · Updated last year
- An open-source framework for training large multimodal models. ☆3,952 · Updated 9 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,727 · Updated 8 months ago
- An open-source implementation of CLIP. ☆11,957 · Updated last week
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,204 · Updated 11 months ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,483 · Updated 2 months ago
- Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with diverse controls for user preferences ☆1,745 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,613 · Updated this week
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,572 · Updated 6 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,978 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆958 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆567 · Updated last year
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ☆1,423 · Updated 2 years ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆696 · Updated 3 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,942 · Updated last month
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions ☆2,677 · Updated 2 months ago
- Strong, open-source foundation models for image recognition. ☆3,284 · Updated 4 months ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,620 · Updated 10 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,882 · Updated last year
- CLIP-like model evaluation ☆725 · Updated this week