facebookresearch / multimodal
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
★ 1,619 · Updated last week
Alternatives and similar repositories for multimodal
Users interested in multimodal are comparing it to the libraries listed below.
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch · ★ 1,249 · Updated 2 years ago
- Code for ALBEF: a new vision-language pre-training method · ★ 1,673 · Updated 2 years ago
- Implementation of CoCa (Contrastive Captioners are Image-Text Foundation Models) in PyTorch · ★ 1,155 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more · ★ 2,988 · Updated last month
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) · ★ 1,208 · Updated last year
- Robust fine-tuning of zero-shot models · ★ 720 · Updated 3 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… · ★ 2,504 · Updated last year
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" · ★ 1,316 · Updated last year
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… · ★ 855 · Updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" · ★ 1,474 · Updated last year
- Meta-Transformer for Unified Multimodal Learning · ★ 1,613 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ★ 1,470 · Updated 3 months ago
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch · ★ 701 · Updated 3 years ago
- Grounded Language-Image Pre-training · ★ 2,447 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers · ★ 713 · Updated last year
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models (see the gating sketch after this list) · ★ 776 · Updated last year
- CLIP-like model evaluation · ★ 737 · Updated 3 weeks ago
- A method to increase the speed and lower the memory footprint of existing vision transformers · ★ 1,071 · Updated last year
- A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… · ★ 1,042 · Updated 9 months ago
- DataComp: In search of the next generation of multimodal datasets · ★ 722 · Updated 2 months ago
- Scenic: A Jax Library for Computer Vision Research and Beyond · ★ 3,587 · Updated 2 weeks ago
- [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize… · ★ 1,908 · Updated last year
- Recent Advances in Vision and Language Pre-Trained Models (VL-PTMs) · ★ 1,152 · Updated 2 years ago
- Simple implementation of the OpenAI CLIP model in PyTorch (a sketch of the underlying contrastive objective appears after this list) · ★ 688 · Updated last year
- Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch · ★ 1,156 · Updated last year
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) · ★ 1,276 · Updated 3 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) · ★ 1,134 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) · ★ 736 · Updated 3 years ago
- Pix2Seq codebase: multi-task training with generative modeling (autoregressive and diffusion) · ★ 917 · Updated last year
- Foundation Architecture for (M)LLMs · ★ 3,089 · Updated last year
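
Many of the repositories above (the CLIP implementations, training solutions, and evaluation suites) revolve around the same contrastive image-text objective: a symmetric cross-entropy over the cosine-similarity matrix of image and text embeddings. As a reference point, here is a minimal sketch of that loss in plain PyTorch; the function name, shapes, and temperature value are illustrative and not taken from any of the listed codebases.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) outputs of the two encoders."""
    # L2-normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch)
    # The i-th image matches the i-th text: targets lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric loss: image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example with random embeddings standing in for encoder outputs.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```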
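Similarly, two of the entries above implement the Sparsely-Gated Mixture-of-Experts layer of Shazeer et al. The sketch below shows the core idea under simplifying assumptions (top-k softmax gating over small feed-forward experts, with no load-balancing loss or capacity limits); the class and its layout are illustrative, not either repository's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Sketch of a sparsely-gated MoE layer: each token is routed to
    its top-k experts and the outputs are mixed by gate weight."""

    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Keep only the k largest gate scores per token.
        scores = self.gate(x)
        topk_vals, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e  # tokens routed to expert e here
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopKMoE(dim=64)
y = moe(torch.randn(10, 64))  # (10, 64)
```

Sparsity is the point of the design: only k of the experts run per token, so parameter count grows with `num_experts` while per-token compute stays roughly constant.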