google-research / vmoe
☆644 Updated this week
Alternatives and similar repositories for vmoe
Users interested in vmoe are comparing it to the libraries listed below.
- Tutel MoE: Optimized Mixture-of-Experts Library, supports DeepSeek FP8/FP4 ☆842 Updated this week
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆766 Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆341 Updated last year
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,120 Updated last year
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆470 Updated 11 months ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆731 Updated 3 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,064 Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆298 Updated 2 months ago
- A curated reading list of research in Mixture-of-Experts (MoE). ☆633 Updated 7 months ago
- Transformer based on a variant of attention that has linear complexity with respect to sequence length ☆777 Updated last year
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,272 Updated 3 years ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,613 Updated last week
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆657 Updated 2 years ago
- A collection of AWESOME things about mixture-of-experts ☆1,143 Updated 6 months ago
- Robust fine-tuning of zero-shot models ☆717 Updated 3 years ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆426 Updated 2 years ago
- Neighborhood Attention Transformer, arxiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arxiv 2022 ☆1,122 Updated last year
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ☆592 Updated 6 months ago
- This repository contains the implementation for the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch." ☆228 Updated last year
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆379 Updated last year
- Implementation of "Attention Is Off By One" by Evan Miller ☆193 Updated last year
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆914 Updated last year
- [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmentation… ☆475 Updated 2 years ago
- DataComp: In search of the next generation of multimodal datasets ☆719 Updated last month
- CLIP-like model evaluation ☆726 Updated last week
- ☆277 Updated 2 years ago
- A fast MoE impl for PyTorch ☆1,746 Updated 4 months ago
- Official Pytorch Implementation of: "ImageNet-21K Pretraining for the Masses" (NeurIPS, 2021) paper ☆767 Updated 2 years ago
- [ICML 2023] Official PyTorch implementation of Global Context Vision Transformers ☆438 Updated last year