A fast MoE impl for PyTorch
☆1,840 · last updated Feb 10, 2025
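fastmoe targets sparse Mixture-of-Experts layers for PyTorch. As a rough illustration of the pattern that fastmoe and the libraries below implement (this is not fastmoe's own API; the class name `SimpleMoE` and all hyperparameters are placeholders for illustration only), a minimal top-k gated MoE layer might look like this sketch:

```python
# Minimal sketch of a top-k gated Mixture-of-Experts layer in plain PyTorch.
# NOT fastmoe's API; SimpleMoE, num_experts, top_k, etc. are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    def __init__(self, d_model=256, d_hidden=1024, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)   # router: scores each expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                             # x: (tokens, d_model)
        scores = self.gate(x)                         # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)          # renormalize the selected scores
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if token_ids.numel() == 0:
                continue
            w = weights[token_ids, slot].unsqueeze(-1)
            out[token_ids] += w * expert(x[token_ids])
        return out

# Usage: route 16 tokens of width 256 through the layer.
x = torch.randn(16, 256)
y = SimpleMoE()(x)
```

Optimized libraries such as fastmoe replace the per-expert Python loop above with fused dispatch/combine kernels and distribute experts across GPUs; the sketch only shows the routing logic.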
Alternatives and similar repositories for fastmoe
Users interested in fastmoe are comparing it to the libraries listed below.
- Tutel MoE: an optimized Mixture-of-Experts library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 (☆969 · last updated Dec 21, 2025)
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al., https://arxiv.org/abs/1701.06538 (☆1,232 · last updated Apr 19, 2024)
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models (☆1,663 · last updated Mar 8, 2024)
- ☆707 · last updated Dec 6, 2025
- ☆89 · last updated Apr 2, 2022
- PyTorch extensions for high performance and large scale training (☆3,400 · last updated Apr 26, 2025)
- Ongoing research training transformer models at scale (☆15,461 · updated this week)
- A collection of AWESOME things about mixture-of-experts (☆1,269 · last updated Dec 8, 2024)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆2,230 · last updated Aug 14, 2025)
- Transformer related optimization, including BERT, GPT (☆6,398 · last updated Mar 27, 2024)
- Fast and memory-efficient exact attention (☆22,460 · updated this week)
- A curated reading list of research in Mixture-of-Experts (MoE) (☆661 · last updated Oct 30, 2024)
- LightSeq: A High Performance Library for Sequence Processing and Generation (☆3,303 · last updated May 16, 2023)
- ATC23 AE (☆46 · last updated May 11, 2023)
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU (☆1,544 · last updated Jul 18, 2025)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,863 · updated this week)
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) (☆1,002 · last updated Dec 6, 2024)
- Inference framework for MoE layers based on TensorRT with Python binding (☆41 · last updated May 31, 2021)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (☆41,706 · last updated Feb 27, 2026)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆1,437 · last updated Mar 20, 2024)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,176 · last updated Feb 28, 2026)
- Training and serving large-scale neural networks with auto parallelization (☆3,184 · last updated Dec 9, 2023)
- Triton-based implementation of Sparse Mixture of Experts (☆268 · last updated Oct 3, 2025)
- A MoE impl for PyTorch, [ATC'23] SmartMoE (☆71 · last updated Jul 11, 2023)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (☆22,030 · last updated Jan 23, 2026)
- Large Context Attention (☆769 · last updated Oct 13, 2025)
- Foundation Architecture for (M)LLMs (☆3,135 · last updated Apr 11, 2024)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) (☆9,084 · updated this week)
- FlashInfer: Kernel Library for LLM Serving (☆5,057 · updated this week)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (☆2,710 · last updated Jun 25, 2024)
- Development repository for the Triton language and compiler (☆18,501 · updated this week)
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models (☆1,895 · last updated Jan 16, 2024)
- A high performance and generic framework for distributed DNN training (☆3,716 · last updated Oct 3, 2023)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI (☆4,843 · updated this week)
- Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab (☆3,156 · last updated Jan 22, 2024)
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python (☆32,170 · last updated Sep 30, 2025)
- verl: Volcano Engine Reinforcement Learning for LLMs (☆19,519 · updated this week)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs (☆1,264 · last updated Aug 28, 2025)
- Train transformer language models with reinforcement learning (☆17,523 · updated this week)