facebookresearch / multimodal
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
⭐ 1,542 · Updated this week
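The listing itself shows no code, so here is a minimal sketch, in plain PyTorch, of the two-tower contrastive image-text setup that TorchMultimodal and several of the repositories below train at scale. Every class and variable name here is a hypothetical stand-in, not TorchMultimodal's actual API.

```python
# Illustrative two-tower contrastive sketch; all names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoTowerModel(nn.Module):
    """Toy image/text encoders projected into a shared embedding space."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Stand-ins for real backbones (e.g. a ViT and a transformer LM).
        self.image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, embed_dim))
        self.text_encoder = nn.EmbeddingBag(1000, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~log(1/0.07), as in CLIP

    def forward(self, images, token_ids):
        img = F.normalize(self.image_encoder(images), dim=-1)
        txt = F.normalize(self.text_encoder(token_ids), dim=-1)
        # Pairwise cosine similarities scaled by a learned temperature.
        return self.logit_scale.exp() * img @ txt.t()


model = TwoTowerModel()
images = torch.randn(8, 3, 32, 32)           # fake image batch
token_ids = torch.randint(0, 1000, (8, 16))  # fake tokenized captions
logits = model(images, token_ids)

# Symmetric InfoNCE loss: matched (image, text) pairs lie on the diagonal.
targets = torch.arange(8)
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
loss.backward()
print(loss.item())
```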
Alternatives and similar repositories for multimodal:
Users interested in multimodal are comparing it to the libraries listed below.
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ⭐ 1,230 · Updated 2 years ago
- Meta-Transformer for Unified Multimodal Learning ⭐ 1,564 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ⭐ 1,174 · Updated 7 months ago
- Code for ALBEF: a new vision-language pre-training method ⭐ 1,606 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ⭐ 669 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ⭐ 1,103 · Updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ⭐ 1,427 · Updated 10 months ago
- Grounded Language-Image Pre-training ⭐ 2,320 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ⭐ 2,583 · Updated last week
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ⭐ 1,252 · Updated 11 months ago
- CLIP-like model evaluation (see the zero-shot usage sketch after this list) ⭐ 664 · Updated this week
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ⭐ 1,350 · Updated 2 months ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ⭐ 1,149 · Updated 2 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L…
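In the spirit of the "CLIP-like model evaluation" entry above, a hedged zero-shot classification sketch using the Hugging Face transformers CLIP wrappers. It assumes transformers and Pillow are installed and that the openai/clip-vit-base-patch32 checkpoint can be downloaded; the blank placeholder image is illustrative only.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))  # placeholder; use a real photo in practice
labels = ["a photo of a cat", "a photo of a dog", "a photo of a flamingo"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# One row of image-text similarity logits; softmax gives zero-shot class scores.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```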