google-research / vmoe
☆606 · Updated last month
Alternatives and similar repositories for vmoe:
Users who are interested in vmoe are comparing it to the libraries listed below; a minimal MoE gating sketch follows the list for context.
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆692 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆306 · Updated 8 months ago
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆766 · Updated this week
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆705 · Updated 2 years ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆260 · Updated 9 months ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆446 · Updated 7 months ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆413 · Updated last year
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆248 · Updated last year
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,043 · Updated 10 months ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆774 · Updated 7 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,012 · Updated 8 months ago
- A curated reading list of research in Mixture-of-Experts (MoE). ☆585 · Updated 3 months ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆397 · Updated 4 months ago
- Implementation of "Attention Is Off By One" by Evan Miller ☆189 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆645 · Updated 2 years ago
- ☆271 · Updated 2 years ago
- This repository contains the implementation for the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch." ☆224 · Updated last year
- A fast MoE implementation for PyTorch ☆1,627 · Updated last week
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆358 · Updated last year
- Robust fine-tuning of zero-shot models ☆669 · Updated 2 years ago
- Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper ☆746 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆707 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,541 · Updated this week
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆240 · Updated last year
- My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆214 · Updated 3 weeks ago
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆185 · Updated 2 years ago
- Transformer based on a variant of attention that is linear complexity with respect to sequence length ☆738 · Updated 9 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆260 · Updated this week
- A PyTorch implementation of MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis ☆551 · Updated last year
- A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model ☆560 · Updated 2 months ago
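Several of the repositories above implement the same sparsely-gated Mixture-of-Experts pattern that vmoe applies to vision: a learned router scores the experts for each token, the top-k experts process it, and their outputs are mixed by the (renormalized) gate weights. The sketch below is a minimal PyTorch illustration of that pattern only; it is not taken from vmoe or any listed library, and the `TopKMoE` class name, expert MLP shape, and dimensions are hypothetical choices.

```python
# Minimal top-k sparsely-gated MoE layer (illustrative sketch, not from any repo listed above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    """Routes each token to its top-k experts and mixes their outputs by gate weight."""

    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)  # router producing per-expert logits
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
                for _ in range(num_experts)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        logits = self.gate(x)                             # (batch, seq, num_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)            # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[..., slot]                     # expert assigned to each token in this slot
            w = weights[..., slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e                           # tokens routed to expert e
                if mask.any():
                    out[mask] = out[mask] + w[mask] * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = TopKMoE(dim=64, num_experts=4, k=2)
    tokens = torch.randn(2, 10, 64)
    print(layer(tokens).shape)  # torch.Size([2, 10, 64])
```

Production implementations such as Tutel and FastMoE additionally add capacity limits, load-balancing losses, and expert parallelism across devices; the loop over experts here is only meant to make the routing logic explicit.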