google-research / vmoe
☆638 · Updated 2 weeks ago
Alternatives and similar repositories for vmoe
Users interested in vmoe are comparing it to the libraries listed below.
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆758 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆333 · Updated 11 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆294 · Updated 2 months ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆467 · Updated 10 months ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting DeepSeek FP8/FP4 ☆829 · Updated this week
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,059 · Updated 11 months ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆791 · Updated 11 months ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆249 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆727 · Updated 3 years ago
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,111 · Updated last year
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆425 · Updated 2 years ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆681 · Updated 6 months ago
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,269 · Updated 3 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆401 · Updated 8 months ago
- Low-rank adaptation for Vision Transformer ☆409 · Updated last year
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length ☆768 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆514 · Updated 2 weeks ago
- Implementation of "Attention Is Off By One" by Evan Miller ☆190 · Updated last year
- This repository contains the implementation for the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch." ☆227 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆294 · Updated 3 months ago
- Implementation of a memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆378 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆655 · Updated 2 years ago
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ☆590 · Updated 6 months ago
- Implementation of Linformer for Pytorch ☆285 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆166 · Updated last year
- A collection of AWESOME things about mixture-of-experts ☆1,135 · Updated 5 months ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆831 · Updated 10 months ago
- A curated reading list of research in Mixture-of-Experts (MoE). ☆628 · Updated 7 months ago
- An All-MLP solution for Vision, from Google AI ☆1,022 · Updated 8 months ago
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆912 · Updated last year