☆707 · Updated Dec 6, 2025
Alternatives and similar repositories for vmoe
Users interested in vmoe are comparing it to the libraries listed below.
- Tutel MoE: Optimized Mixture-of-Experts library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 (☆969, updated Dec 21, 2025)
- A fast MoE implementation for PyTorch (☆1,840, updated Feb 10, 2025)
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al., https://arxiv.org/abs/1701.06538 (☆1,231, updated Apr 19, 2024)
- This package implements THOR: Transformer with Stochastic Experts. (☆64, updated Oct 7, 2021)
- A collection of AWESOME things about mixture-of-experts (☆1,266, updated Dec 8, 2024)
- [NeurIPS 2022] "M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design", Hanxue … (☆136, updated Nov 30, 2022)
- A curated reading list of research in Mixture-of-Experts (MoE). (☆661, updated Oct 30, 2024)
- Code release for SLIP: Self-supervision meets Language-Image Pre-training (☆787, updated Feb 9, 2023)
- Official DeiT repository (☆4,325, updated Mar 15, 2024)
- ☆285, updated Aug 14, 2025
- [ICCV 2023] You Only Look at One Partial Sequence (☆343, updated Oct 21, 2023)
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) (☆765, updated Apr 14, 2022)
- PyTorch implementation of MAE, https://arxiv.org/abs/2111.06377 (☆8,230, updated Jul 23, 2024)
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models (☆1,663, updated Mar 8, 2024)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. (☆3,371, updated May 19, 2025)
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" (☆407, updated Nov 10, 2023)
- This PyTorch package implements MoEBERT: From BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). (☆114, updated May 2, 2022)
- [ECCV 2022] Bootstrapped Masked Autoencoders for Vision BERT Pretraining (☆97, updated Nov 2, 2022)
- Grounded Language-Image Pre-training (☆2,575, updated Jan 24, 2024)
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts", https://arxiv.org/pdf/2308.00951.pdf (☆82, updated Oct 5, 2023)
- DataComp: In search of the next generation of multimodal datasets (☆772, updated Apr 28, 2025)
- A method to increase the speed and lower the memory footprint of existing vision transformers. (☆1,171, updated Jun 17, 2024)
- ☆12,318, updated Jan 30, 2026
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models", https://arxiv.org/abs/2201.08371 (☆182, updated Apr 17, 2022)
- EVA Series: Visual Representation Fantasies from BAAI (☆2,647, updated Aug 1, 2024)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm (☆675, updated Sep 19, 2022)
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… (☆76, updated Feb 21, 2022)
- ☆2,946, updated Jan 15, 2026
- Official code for Cross-Covariance Image Transformer (XCiT) (☆674, updated Sep 28, 2021)
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… (☆2,554, updated Apr 24, 2024)
- Code release for the ConvNeXt model (☆6,300, updated Jan 8, 2023)
- PyTorch implementation of LIMoE (☆52, updated Apr 1, 2024)
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation (☆5,681, updated Aug 5, 2024)
- An open-source implementation of CLIP. (☆13,430, updated this week)
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. (☆415, updated Jul 14, 2025)
- Foundation Architecture for (M)LLMs (☆3,135, updated Apr 11, 2024)
- ☆222, updated Feb 21, 2023
- Official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (☆15,721, updated Jul 24, 2024)
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 (☆615, updated Dec 13, 2022)
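Most of the MoE libraries above implement some variant of the sparsely-gated layer from Shazeer et al. (https://arxiv.org/abs/1701.06538): a router scores all experts for each token, only the top-k experts are evaluated, and their outputs are combined with the renormalized gate weights. A minimal NumPy sketch of that forward pass, assuming linear experts for brevity (all names are illustrative; this is not the API of any library listed here):

```python
import numpy as np

def top_k_moe(x, gate_w, expert_ws, k=2):
    """Sparsely-gated MoE forward pass for a single token (sketch).

    x: (d,) input vector; gate_w: (d, n_experts) router weights;
    expert_ws: list of (d, d) expert weight matrices (linear experts
    stand in for the usual per-expert MLPs). Hypothetical names.
    """
    logits = x @ gate_w                       # one routing score per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    # softmax over only the selected logits -> sparse mixture weights
    z = np.exp(logits[top] - logits[top].max())
    weights = z / z.sum()
    # run only the chosen experts and combine by gate weight
    return sum(w * (x @ expert_ws[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n = 4, 8
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n))
expert_ws = [rng.normal(size=(d, d)) for _ in range(n)]
out = top_k_moe(x, gate_w, expert_ws, k=2)
```

The point of the top-k selection is that compute scales with k, not with the total number of experts, which is what lets the listed libraries grow parameter counts without a matching growth in FLOPs.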