facebookresearch / multimodal
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
⭐ 1,589 · Updated last week
Alternatives and similar repositories for multimodal:
Users interested in multimodal are comparing it to the libraries listed below.
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch (⭐ 1,240 · Updated 2 years ago)
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch (⭐ 1,131 · Updated last year)
- Code for ALBEF: a new vision-language pre-training method (⭐ 1,643 · Updated 2 years ago)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. (⭐ 2,837 · Updated last month)
- Scenic: A JAX Library for Computer Vision Research and Beyond (⭐ 3,525 · Updated this week)
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). (⭐ 1,195 · Updated 10 months ago)
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" (⭐ 1,285 · Updated last year)
- Robust fine-tuning of zero-shot models (⭐ 697 · Updated 3 years ago)
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… (⭐ 849 · Updated last year)
- Machine learning metrics for distributed, scalable PyTorch applications. (⭐ 2,258 · Updated this week)
- A concise but complete implementation of CLIP with various experimental improvements from recent papers (⭐ 707 · Updated last year)
- Meta-Transformer for Unified Multimodal Learning (⭐ 1,595 · Updated last year)
- EVA Series: Visual Representation Fantasies from BAAI (⭐ 2,477 · Updated 9 months ago)
- Grounded Language-Image Pre-training (⭐ 2,392 · Updated last year)
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" (⭐ 1,455 · Updated last year)
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… (⭐ 2,502 · Updated last year)
- PyTorch implementation of MoCo v3: https://arxiv.org/abs/2104.02057 (⭐ 1,264 · Updated 3 years ago)
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. (⭐ 2,574 · Updated 2 months ago)
- CLIP-like model evaluation (⭐ 703 · Updated last month)
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… (⭐ 1,428 · Updated last month)
- A method to increase the speed and lower the memory footprint of existing vision transformers. (⭐ 1,046 · Updated 10 months ago)
- Recent Advances in Vision and Language Pre-Trained Models (VL-PTMs) (⭐ 1,152 · Updated 2 years ago)
- Code release for SLIP: Self-supervision meets Language-Image Pre-training (⭐ 766 · Updated 2 years ago)
- Hiera: A fast, powerful, and simple hierarchical vision transformer. (⭐ 977 · Updated last year)
- A survey on multimodal learning research. (⭐ 324 · Updated last year)
- An open-source framework for training large multimodal models. (⭐ 3,904 · Updated 8 months ago)
- A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… (⭐ 1,035 · Updated 7 months ago)
- A curated list of Multimodal-Related Research. (⭐ 1,348 · Updated last year)
- (⭐ 629 · Updated last week)
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al.: https://arxiv.org/abs/1701.06538 (⭐ 1,100 · Updated last year)
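
Many of the entries above (the CLIP research lists, SLIP, MetaCLIP, CoCa, and the CLIP evaluation tooling) revolve around CLIP-style contrastive image-text pretraining. As a rough orientation, here is a minimal sketch of the symmetric InfoNCE objective at the core of that family, written in plain PyTorch; it is not taken from any of the listed codebases, and the function name, temperature value, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) tensors where the pair at the same
    batch index is the matching image-caption pair.
    """
    # L2-normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are the true pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image->text and text->image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Example with random embeddings standing in for encoder outputs.
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(clip_contrastive_loss(img, txt).item())
```

The variants in the list differ mainly in what surrounds this loss: the encoders, the data curation (MetaCLIP), added objectives such as captioning (CoCa) or self-supervision (SLIP), rather than in the loss itself.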