google-research / vmoe
☆663 · Updated 3 weeks ago
Alternatives and similar repositories for vmoe
Users interested in vmoe are comparing it to the libraries listed below; a minimal sketch of the MoE routing pattern most of them share follows the list.
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆792 · Updated last year
- Tutel MoE: Optimized Mixture-of-Experts library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 FP8/NVFP4/MXFP4 ☆896 · Updated last week
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆480 · Updated last year
- A method to increase the speed and lower the memory footprint of existing vision transformers ☆1,083 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆359 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆313 · Updated 4 months ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆797 · Updated 2 months ago
- A curated reading list of research in Mixture-of-Experts (MoE) ☆641 · Updated 9 months ago
- Official code for the CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 · Updated last year
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,157 · Updated last year
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆426 · Updated 2 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆743 · Updated 3 years ago
- A fast MoE implementation for PyTorch ☆1,777 · Updated 6 months ago
- CLIP-like model evaluation ☆756 · Updated last week
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆665 · Updated 2 years ago
- A collection of AWESOME things about mixture-of-experts ☆1,190 · Updated 8 months ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆369 · Updated last year
- Robust fine-tuning of zero-shot models ☆731 · Updated 3 years ago
- TorchMultimodal, a PyTorch library for training state-of-the-art multimodal multi-task models at scale ☆1,646 · Updated last week
- Lossless Training Speed Up by Unbiased Dynamic Data Pruning ☆340 · Updated 11 months ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,165 · Updated last year
- Official PyTorch implementation of "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) ☆772 · Updated 2 years ago
- A general and accurate MACs/FLOPs profiler for PyTorch models ☆626 · Updated 3 weeks ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains ☆404 · Updated 11 months ago
- DataComp: In search of the next generation of multimodal datasets ☆734 · Updated 3 months ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆622 · Updated 2 years ago
- [CVPR 2023 Highlight] Official implementation of "Stitchable Neural Networks" ☆247 · Updated 2 years ago
- PyTorch code for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆237 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,256 · Updated 2 years ago
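
Most of the MoE repositories above implement some variant of the same routing pattern: a learned gate scores the experts for each token, the token is dispatched to its top-k experts, and the expert outputs are recombined with the gate weights. Below is a minimal PyTorch sketch of that shared pattern; all class names, shapes, and defaults are illustrative assumptions and do not reproduce the API of any listed library, which use fused dispatch kernels rather than a Python loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGate(nn.Module):
    """Learned linear gate that routes each token to its top-k experts."""
    def __init__(self, dim, num_experts, k=2):
        super().__init__()
        self.w_gate = nn.Linear(dim, num_experts, bias=False)
        self.k = k

    def forward(self, x):
        logits = self.w_gate(x)                        # (tokens, num_experts)
        weights, indices = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)           # renormalize over the k chosen experts
        return weights, indices

class MoELayer(nn.Module):
    """Illustrative MoE layer: each expert processes only the tokens routed
    to it, via a plain Python loop; libraries such as Tutel and FastMoE
    replace this dispatch with optimized kernels."""
    def __init__(self, dim, hidden, num_experts=8, k=2):
        super().__init__()
        self.gate = TopKGate(dim, num_experts, k)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                              # x: (tokens, dim)
        weights, indices = self.gate(x)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # Tokens whose top-k choices include expert e, and which slot chose it.
            token_ids, slot = (indices == e).nonzero(as_tuple=True)
            if token_ids.numel():
                out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
        return out

x = torch.randn(16, 64)
print(MoELayer(dim=64, hidden=128)(x).shape)           # torch.Size([16, 64])
```

The papers and libraries above differ mainly in what surrounds this core: auxiliary load-balancing losses, capacity limits per expert, soft (weighted, non-discrete) routing as in Soft MoE, and the distributed all-to-all communication needed once experts live on different devices.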