davidmrau / mixture-of-experts
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
☆1,188 · Updated last year
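As context for the listing below, here is a minimal sketch of the sparsely-gated top-k mixture-of-experts routing described in the Shazeer et al. paper. It is illustrative only: class and parameter names are assumptions, not the API of the davidmrau/mixture-of-experts repository, and the dense per-expert loop trades efficiency for readability.

```python
# Minimal sketch of sparsely-gated top-k MoE routing (Shazeer et al., 2017).
# Names and shapes are illustrative, not this repository's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    def __init__(self, dim, num_experts=8, k=2, hidden=256):
        super().__init__()
        self.k = k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )
        # Trainable gating network scoring every expert for every token.
        self.gate = nn.Linear(dim, num_experts, bias=False)

    def forward(self, x):                       # x: (batch, dim)
        logits = self.gate(x)                   # (batch, num_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)  # renormalize over the chosen k experts
        out = torch.zeros_like(x)
        # Dense loop over experts for clarity; real implementations dispatch
        # tokens to experts in batches so unselected experts do no work.
        for e, expert in enumerate(self.experts):
            mask = topk_idx == e                # (batch, k): positions routed to expert e
            if mask.any():
                rows = mask.any(dim=-1)
                w = (weights * mask).sum(dim=-1, keepdim=True)[rows]
                out[rows] += w * expert(x[rows])
        return out


# Usage: route a batch of 4 token embeddings through 8 experts with top-2 gating.
moe = TopKMoE(dim=32, num_experts=8, k=2)
y = moe(torch.randn(4, 32))
print(y.shape)  # torch.Size([4, 32])
```

The paper additionally adds tunable noise to the gating logits and an auxiliary load-balancing loss so tokens spread across experts; this sketch shows only the routing itself.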
Alternatives and similar repositories for mixture-of-experts
Users interested in mixture-of-experts are comparing it to the libraries listed below.
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆821 · Updated 2 years ago
- A fast MoE impl for PyTorch ☆1,806 · Updated 8 months ago
- A collection of AWESOME things about mixture-of-experts ☆1,217 · Updated 10 months ago
- A curated reading list of research in Mixture-of-Experts (MoE) ☆648 · Updated 11 months ago
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length ☆801 · Updated last year
- ☆683 · Updated 2 months ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆934 · Updated 3 weeks ago
- Rotary Transformer ☆1,039 · Updated 3 years ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Google Brain, in Pytorch ☆366 · Updated last year
- Up-to-date (2024) list of datasets, codebases and papers on Multi-Task Learning (MTL), from a machine learning perspective ☆792 · Updated 2 weeks ago
- Pytorch library for fast transformer implementations ☆1,745 · Updated 2 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale ☆1,656 · Updated last week
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆769 · Updated 3 months ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,268 · Updated last year
- Collection of papers on state-space models ☆601 · Updated last month
- An All-MLP solution for Vision, from Google AI ☆1,050 · Updated 3 months ago
- PyTorch implementation of the InfoNCE loss for self-supervised learning ☆592 · Updated last year
- Structured state space sequence models ☆2,750 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers ☆767 · Updated last year
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆541 · Updated 3 years ago
- A PyTorch Library for Multi-Task Learning ☆2,426 · Updated 5 months ago
- A comprehensive list of awesome contrastive self-supervised learning papers ☆1,289 · Updated last year
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,154 · Updated 3 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,181 · Updated 2 years ago
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,777 · Updated 2 weeks ago
- An implementation of local windowed attention for language modeling ☆483 · Updated 3 months ago
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,205 · Updated 2 years ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆802 · Updated 4 months ago
- Diffusion-LM ☆1,192 · Updated last year
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.) ☆856 · Updated last year