google-research / vmoe
☆613 · Updated 2 months ago
Alternatives and similar repositories for vmoe:
Users who are interested in vmoe are comparing it to the libraries listed below.
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆789 · Updated this week
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆714 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆321 · Updated 9 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,029 · Updated 9 months ago
- A fast MoE implementation for PyTorch ☆1,682 · Updated last month
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,079 · Updated 11 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆271 · Updated 11 months ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆452 · Updated 8 months ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆249 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆717 · Updated 2 years ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆417 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆649 · Updated 2 years ago
- Implementation of "Attention Is Off By One" by Evan Miller ☆190 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,560 · Updated this week
- Robust fine-tuning of zero-shot models ☆683 · Updated 2 years ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆506 · Updated 5 months ago
- My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆224 · Updated 2 months ago
- A collection of AWESOME things about mixture-of-experts ☆1,074 · Updated 3 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆243 · Updated last year
- Transformer based on a variant of attention that has linear complexity with respect to sequence length ☆751 · Updated 10 months ago
- ☆273 · Updated 2 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,322 · Updated 9 months ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆781 · Updated 8 months ago
- Rotary Transformer ☆922 · Updated 3 years ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆596 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆688 · Updated last year
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆523 · Updated 3 years ago
- Lossless Training Speed Up by Unbiased Dynamic Data Pruning ☆331 · Updated 6 months ago
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022 ☆1,101 · Updated 10 months ago
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆188 · Updated 2 years ago