facebookresearch / multimodal
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
★ 1,519 · Updated this week
Alternatives and similar repositories for multimodal:
Users interested in multimodal are comparing it to the libraries listed below.
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ★ 1,227 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ★ 1,092 · Updated last year
- Code for ALBEF: a new vision-language pre-training method ★ 1,598 · Updated 2 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ★ 2,516 · Updated 3 weeks ago
- Robust fine-tuning of zero-shot models ★ 667 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ★ 1,163 · Updated 6 months ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ★ 1,427 · Updated 9 months ago
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ★ 814 · Updated last year
- A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… ★ 1,005 · Updated 3 months ago
- EVA Series: Visual Representation Fantasies from BAAI ★ 2,381 · Updated 5 months ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ★ 2,448 · Updated 8 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ★ 995 · Updated 7 months ago
- Grounded Language-Image Pre-training ★ 2,297 · Updated 11 months ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ★ 676 · Updated 2 years ago
- CLIP-like model evaluation ★ 649 · Updated 5 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ★ 940 · Updated 10 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ★ 642 · Updated 2 years ago
- Meta-Transformer for Unified Multimodal Learning ★ 1,561 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ★ 704 · Updated last year
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ★ 1,237 · Updated 10 months ago
- PyTorch implementation of MoCo v3: https://arxiv.org/abs/2104.02057 ★ 1,235 · Updated 3 years ago
- [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize … ★ 1,827 · Updated 11 months ago
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ★ 890 · Updated last year
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ★ 754 · Updated last year
- ★ 596 · Updated this week
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ★ 668 · Updated last year
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ★ 1,148 · Updated 2 years ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ★ 796 · Updated 6 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ★ 1,854 · Updated 7 months ago
- A coding-free framework built on PyTorch for reproducible deep learning studies. PyTorch Ecosystem. 25 knowledge distillation methods p… ★ 1,423 · Updated this week
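
Many of the repositories listed above (and TorchMultimodal itself) center on CLIP-style contrastive image-text pretraining. As a rough orientation only, the sketch below shows the symmetric contrastive (InfoNCE) loss such training optimizes, in plain PyTorch. It is not the API of any listed library; the function name and shapes are illustrative assumptions.

```python
# Minimal illustrative sketch of a CLIP-style symmetric contrastive loss.
# Plain PyTorch; no repository-specific API is assumed.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) outputs of the two encoders."""
    # L2-normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix, scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature
    # Matching image/text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy over image->text and text->image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Usage with dummy embeddings (batch of 8, 512-dim):
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```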