facebookresearch / multimodal
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
⭐ 1,691 · Updated this week
Alternatives and similar repositories for multimodal
Users interested in multimodal are comparing it to the libraries listed below.
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch · ⭐ 1,198 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch · ⭐ 1,274 · Updated 3 years ago
- Code for ALBEF: a new vision-language pre-training method · ⭐ 1,748 · Updated 3 years ago
- Robust fine-tuning of zero-shot models · ⭐ 759 · Updated 3 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. · ⭐ 3,339 · Updated 8 months ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. · ⭐ 719 · Updated 3 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). · ⭐ 1,231 · Updated last year
- CLIP-like model evaluation · ⭐ 800 · Updated 2 weeks ago
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… · ⭐ 895 · Updated 2 years ago
- Meta-Transformer for Unified Multimodal Learning · ⭐ 1,651 · Updated 2 years ago
- A Unified Library for Parameter-Efficient and Modular Transfer Learning · ⭐ 2,801 · Updated 3 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. · ⭐ 1,166 · Updated last year
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. · ⭐ 2,980 · Updated 7 months ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" · ⭐ 1,520 · Updated last year
- 🦁 Lion, new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch (a sketch of its update rule follows this list) · ⭐ 2,182 · Updated last year
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models · ⭐ 846 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers (the contrastive objective these CLIP variants share is sketched after this list) · ⭐ 721 · Updated 2 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… · ⭐ 2,555 · Updated last year
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). · ⭐ 858 · Updated last year
- Scenic: A Jax Library for Computer Vision Research and Beyond · ⭐ 3,759 · Updated 3 weeks ago
- [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize … · ⭐ 1,976 · Updated 2 years ago
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) · ⭐ 939 · Updated 2 years ago
- A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… · ⭐ 1,065 · Updated last year
- ⭐ 702 · Updated last month
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) · ⭐ 1,313 · Updated 4 years ago
- Simple implementation of OpenAI CLIP model in PyTorch. · ⭐ 720 · Updated 3 months ago
- EVA Series: Visual Representation Fantasies from BAAI · ⭐ 2,641 · Updated last year
- Hiera: A fast, powerful, and simple hierarchical vision transformer. · ⭐ 1,052 · Updated last year
- Foundation Architecture for (M)LLMs · ⭐ 3,133 · Updated last year
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" · ⭐ 1,353 · Updated last year
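
The Lion entry above refers to the optimizer from "Symbolic Discovery of Optimization Algorithms" (Chen et al., 2023). Below is a minimal sketch of its update rule in plain PyTorch; the function name `lion_step` and the hyperparameter defaults are illustrative assumptions, not the listed repo's API (that package provides a ready-made, `torch.optim`-style optimizer class).

```python
import torch

@torch.no_grad()
def lion_step(param, grad, exp_avg, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion update for a single parameter tensor (sketch)."""
    # Update direction: sign of an interpolation between momentum and gradient.
    update = exp_avg.mul(beta1).add(grad, alpha=1 - beta1).sign_()
    # Decoupled weight decay, as in AdamW.
    param.mul_(1 - lr * weight_decay)
    # Every coordinate moves by exactly lr, only the sign varies.
    param.add_(update, alpha=-lr)
    # Momentum is updated with its own interpolation factor beta2.
    exp_avg.mul_(beta2).add_(grad, alpha=1 - beta2)
```

Because the sign update gives every coordinate the same magnitude, the paper suggests a learning rate roughly 3-10x smaller than AdamW's, with weight decay scaled up correspondingly.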
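
Several CLIP-related entries above (training CLIP from scratch, the concise reimplementation, CLIP-like evaluation) revolve around the same symmetric image-text contrastive objective. Here is a minimal sketch in plain PyTorch, assuming paired embeddings and a fixed temperature; the function name is illustrative, and CLIP itself learns the temperature as a parameter.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings (sketch)."""
    # Normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # (batch, batch) similarity matrix; matching pairs lie on the diagonal.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy in both directions: image-to-text and text-to-image.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```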