☆91 · Apr 2, 2022 · Updated 4 years ago
Alternatives and similar repositories for FasterMoE
Users interested in FasterMoE are comparing it to the libraries listed below.
- Inference framework for MoE layers based on TensorRT with Python binding · ☆41 · May 31, 2021 · Updated 4 years ago
- ATC '23 artifact evaluation · ☆45 · May 11, 2023 · Updated 2 years ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 · ☆988 · Apr 11, 2026 · Updated 3 weeks ago
- ☆19 · Sep 15, 2022 · Updated 3 years ago
- ☆10 · Apr 29, 2023 · Updated 3 years ago
- Curated collection of papers in machine learning systems · ☆546 · Feb 7, 2026 · Updated 2 months ago
- Lucid: A Non-Intrusive, Scalable and Interpretable Scheduler for Deep Learning Training Jobs · ☆58 · May 21, 2023 · Updated 2 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… · ☆15 · Sep 21, 2023 · Updated 2 years ago
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers · ☆48 · Oct 21, 2022 · Updated 3 years ago
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training · ☆24 · Mar 1, 2024 · Updated 2 years ago
- ☆18 · Jan 27, 2025 · Updated last year
- ☆14 · Nov 7, 2025 · Updated 5 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. · ☆335 · Dec 13, 2025 · Updated 4 months ago
- ☆69 · Mar 14, 2023 · Updated 3 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning · ☆10 · Apr 28, 2023 · Updated 3 years ago
- Official repository for "IPDPS '24 QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" · ☆20 · Feb 23, 2024 · Updated 2 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling · ☆12 · Mar 7, 2024 · Updated 2 years ago
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention] · ☆56 · Mar 5, 2025 · Updated last year
- A collection of AWESOME things about mixture-of-experts · ☆1,275 · Dec 8, 2024 · Updated last year
- Compiler for Dynamic Neural Networks · ☆45 · Nov 13, 2023 · Updated 2 years ago
- This package implements THOR: Transformer with Stochastic Experts. · ☆64 · Oct 7, 2021 · Updated 4 years ago
- ☆22 · Apr 22, 2024 · Updated 2 years ago
- Visualize expert firing frequencies across sentences in the Mixtral MoE model · ☆18 · Dec 22, 2023 · Updated 2 years ago
- Course materials for 11-767 · ☆13 · Nov 10, 2022 · Updated 3 years ago
- HeliosArtifact · ☆22 · Sep 27, 2022 · Updated 3 years ago
- Artifacts for our SIGCOMM '22 paper Muri · ☆43 · Dec 29, 2023 · Updated 2 years ago
- A Sparse-tensor Communication Framework for Distributed Deep Learning · ☆13 · Nov 1, 2021 · Updated 4 years ago
- ☆717 · Dec 6, 2025 · Updated 5 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … · ☆124 · Dec 18, 2023 · Updated 2 years ago
- The official repository for the experiments included in the paper titled "Patch-level Routing in Mixture-of-Experts is Provably Sample-ef… · ☆14 · Feb 12, 2026 · Updated 2 months ago
- Primo: Practical Learning-Augmented Systems with Interpretable Models · ☆19 · Dec 26, 2023 · Updated 2 years ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances · ☆134 · Feb 22, 2024 · Updated 2 years ago
- ☆10 · Jun 4, 2021 · Updated 4 years ago
- Code for reproducing experiments performed for Accordion · ☆13 · Jun 11, 2021 · Updated 4 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆2,247 · Aug 14, 2025 · Updated 8 months ago
- Artifacts for our ASPLOS '23 paper ElasticFlow · ☆56 · May 10, 2024 · Updated last year
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining · ☆12 · Dec 4, 2023 · Updated 2 years ago
- We present a set of all-reduce compatible gradient compression algorithms which significantly reduce the communication overhead while mai… · ☆10 · Nov 14, 2021 · Updated 4 years ago
- ☆78 · May 4, 2021 · Updated 5 years ago