laekov / fastmoe
A fast MoE impl for PyTorch
☆1,831 · Feb 10, 2025 · Updated last year
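For orientation, the sketch below shows the routing pattern that fastmoe and the libraries listed further down accelerate: a minimal top-1 gated Mixture-of-Experts layer in plain PyTorch. This is an illustrative assumption only, not fastmoe's actual API; the class `SimpleMoE` and its parameters are made up for this example, and a real library replaces the Python loop over experts with fused, distributed kernels.

```python
# Minimal top-1 gated MoE layer in plain PyTorch (illustrative sketch only,
# NOT fastmoe's API). Each token is routed to a single expert MLP and the
# expert output is scaled by the router's softmax score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)  # router / gating network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)      # (tokens, num_experts)
        top_score, top_idx = scores.max(dim=-1)       # top-1 routing decision
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e                       # tokens assigned to expert e
            if mask.any():
                out[mask] = top_score[mask].unsqueeze(-1) * expert(x[mask])
        return out

# usage
moe = SimpleMoE(d_model=16, d_hidden=32, num_experts=4)
y = moe(torch.randn(8, 16))   # (8, 16)
```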
Alternatives and similar repositories for fastmoe
Users that are interested in fastmoe are comparing it to the libraries listed below
- Tutel MoE: Optimized Mixture-of-Experts Library, supports GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆963 · Dec 21, 2025 · Updated last month
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,225 · Apr 19, 2024 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,657 · Mar 8, 2024 · Updated last year
- ☆705 · Dec 6, 2025 · Updated 2 months ago
- ☆89 · Apr 2, 2022 · Updated 3 years ago
- PyTorch extensions for high performance and large scale training. ☆3,397 · Apr 26, 2025 · Updated 9 months ago
- Ongoing research training transformer models at scale ☆15,162 · Updated this week
- A collection of AWESOME things about mixture-of-experts ☆1,262 · Dec 8, 2024 · Updated last year
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆848 · Sep 13, 2023 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,224 · Aug 14, 2025 · Updated 6 months ago
- Transformer-related optimization, including BERT, GPT ☆6,392 · Mar 27, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- A curated reading list of research in Mixture-of-Experts (MoE). ☆660 · Oct 30, 2024 · Updated last year
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,304 · May 16, 2023 · Updated 2 years ago
- ATC23 AE ☆46 · May 11, 2023 · Updated 2 years ago
- A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc.) on CPU and GPU. ☆1,542 · Jul 18, 2025 · Updated 6 months ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,859 · Feb 7, 2026 · Updated last week
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,003 · Dec 6, 2024 · Updated last year
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · May 31, 2021 · Updated 4 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,578 · Feb 7, 2026 · Updated last week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,152 · Feb 7, 2026 · Updated last week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,433 · Mar 20, 2024 · Updated last year
- Training and serving large-scale neural networks with auto parallelization. ☆3,180 · Dec 9, 2023 · Updated 2 years ago
- A MoE impl for PyTorch, [ATC'23] SmartMoE ☆71 · Jul 11, 2023 · Updated 2 years ago
- Triton-based implementation of Sparse Mixture of Experts. ☆265 · Oct 3, 2025 · Updated 4 months ago
- Large Context Attention ☆766 · Oct 13, 2025 · Updated 4 months ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,021 · Jan 23, 2026 · Updated 3 weeks ago
- Foundation Architecture for (M)LLMs ☆3,130 · Apr 11, 2024 · Updated last year
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL)