google-research / vmoe
☆572 · Updated this week
Related projects
Alternatives and complementary repositories for vmoe
- Tutel MoE: An Optimized Mixture-of-Experts Implementation · ☆728 · Updated last week
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models · ☆637 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch · ☆291 · Updated 4 months ago
- Official code for our CVPR'22 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" · ☆245 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch · ☆242 · Updated 6 months ago
- Implementation of "Attention Is Off By One" by Evan Miller · ☆182 · Updated last year
- A method to increase the speed and lower the memory footprint of existing vision transformers · ☆967 · Updated 4 months ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time · ☆426 · Updated 3 months ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers · ☆689 · Updated last year
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) · ☆974 · Updated 6 months ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models · ☆759 · Updated 4 months ago
- Official open-source code for "Scaling Language-Image Pre-training via Masking" · ☆406 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) · ☆676 · Updated 2 years ago
- ☆267 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM · ☆229 · Updated 9 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm · ☆636 · Updated 2 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" · ☆347 · Updated last year
- A fast MoE implementation for PyTorch · ☆1,560 · Updated 4 months ago
- CLIP-like model evaluation · ☆611 · Updated 2 months ago
- This repository contains the implementation for the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch" · ☆220 · Updated last year
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length · ☆695 · Updated 6 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from …" · ☆130 · Updated 6 months ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… · ☆284 · Updated 9 months ago
- Microsoft Automatic Mixed Precision Library · ☆522 · Updated last month
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) · ☆1,214 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch · ☆1,210 · Updated 2 years ago
- A curated reading list of research in Mixture-of-Experts (MoE) · ☆533 · Updated last week
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) · ☆867 · Updated last year
- Lossless Training Speed Up by Unbiased Dynamic Data Pruning · ☆317 · Updated last month
- ☆565 · Updated 2 weeks ago
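
Many of the repositories above (Tutel, FastMoE, the Shazeer re-implementation, ST-MoE) implement some variant of sparsely-gated Mixture-of-Experts routing. For orientation only, here is a minimal PyTorch sketch of top-k token routing; `TopKMoE` and its parameters are illustrative assumptions, not the API of vmoe or any listed project.

```python
# Minimal sketch of a sparsely-gated top-k MoE layer (illustration only;
# not the routing code of any specific repository above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim, hidden_dim, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(),
                          nn.Linear(hidden_dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (tokens, dim); flatten batch/sequence dims before calling.
        logits = self.gate(x)                       # (tokens, num_experts)
        weights, idx = logits.topk(self.k, dim=-1)  # route each token to k experts
        weights = F.softmax(weights, dim=-1)        # renormalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            tok, slot = (idx == e).nonzero(as_tuple=True)  # tokens sent to expert e
            if tok.numel() == 0:
                continue
            out[tok] += weights[tok, slot].unsqueeze(-1) * expert(x[tok])
        return out

x = torch.randn(16, 64)               # 16 tokens, model dim 64
moe = TopKMoE(dim=64, hidden_dim=128)
print(moe(x).shape)                   # torch.Size([16, 64])
```

Production implementations such as Tutel and FastMoE additionally handle expert capacity limits, load-balancing auxiliary losses, and expert parallelism across devices, which this sketch omits.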