A collection of AWESOME things about mixture-of-experts
☆1,275 · Dec 8, 2024 · Updated last year
Alternatives and similar repositories for awesome-mixture-of-experts
Users interested in awesome-mixture-of-experts are comparing it to the libraries listed below.
- A curated reading list of research in Mixture-of-Experts (MoE). ☆663 · Oct 30, 2024 · Updated last year
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ☆1,677 · Mar 8, 2024 · Updated 2 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538); a minimal gating sketch appears after this list. ☆1,242 · Apr 19, 2024 · Updated 2 years ago
- A fast MoE implementation for PyTorch ☆1,849 · Feb 10, 2025 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,001 · Dec 6, 2024 · Updated last year
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆859 · Sep 13, 2023 · Updated 2 years ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆986 · Apr 11, 2026 · Updated 2 weeks ago
- ☆716 · Dec 6, 2025 · Updated 4 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆382 · Jun 17, 2024 · Updated last year
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆490 · Jul 23, 2025 · Updated 9 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆345 · Apr 2, 2025 · Updated last year
- PyTorch implementation of LIMoE ☆52 · Apr 1, 2024 · Updated 2 years ago
- ☆277 · Oct 31, 2023 · Updated 2 years ago
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,316 · Jul 15, 2025 · Updated 9 months ago
- ☆91 · Apr 2, 2022 · Updated 4 years ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆83 · Oct 5, 2023 · Updated 2 years ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,417 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆273 · Oct 3, 2025 · Updated 6 months ago
- Latest Advances on Multimodal Large Language Models ☆17,705 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆1,012 · Sep 23, 2025 · Updated 7 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,921 · Jan 16, 2024 · Updated 2 years ago
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆106 · Jun 20, 2025 · Updated 10 months ago
- 📰 Must-read papers and blogs on LLM based Long Context Modeling 🔥 ☆1,971 · Apr 15, 2026 · Updated 2 weeks ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,770 · Aug 4, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆23,563 · Updated this week
- Ongoing research training transformer models at scale ☆16,145 · Updated this week
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Feb 28, 2023 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,247 · Aug 14, 2025 · Updated 8 months ago
- A framework for few-shot evaluation of language models. ☆12,331 · Apr 22, 2026 · Updated last week
- Awesome LLM compression research papers and tools. ☆1,824 · Feb 23, 2026 · Updated 2 months ago
- 🚀 Efficient implementations for emerging model architectures ☆4,999 · Updated this week
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆160 · Jan 1, 2025 · Updated last year
- verl/HybridFlow: A Flexible and Efficient RL Post-Training Framework ☆20,930 · Updated this week
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 ☆5,185 · Apr 20, 2026 · Updated last week
- AllenAI's post-training codebase ☆3,702 · Updated this week
- This package implements THOR: Transformer with Stochastic Experts. ☆64 · Oct 7, 2021 · Updated 4 years ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,727 · Jun 25, 2024 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Sep 20, 2024 · Updated last year
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,204 · Apr 18, 2026 · Updated last week
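
Several entries above (the Shazeer et al. re-implementations, FastMoE, Tutel, and the Triton-based sparse-MoE kernels) revolve around the same top-k gating idea: a router scores each token against every expert, only the k best-scoring experts run on that token, and their outputs are combined with the renormalised router weights. The sketch below is a minimal PyTorch illustration of that idea under assumed layer sizes and a hypothetical `TopKMoE` class name; it is not taken from any listed repository, and it omits the auxiliary load-balancing loss and the fused expert dispatch that the optimized libraries above provide.

```python
# Minimal, illustrative sketch of a sparsely-gated (top-k) MoE layer in the
# spirit of Shazeer et al. (2017). Names and sizes are assumptions for the
# example; this is NOT code from any repository listed above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # One small feed-forward expert each; real systems fuse/parallelise these.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
             for _ in range(n_experts)]
        )
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])               # (n_tokens, d_model)
        logits = self.router(tokens)                      # (n_tokens, n_experts)
        topk_logits, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_logits, dim=-1)          # renormalise over the chosen k experts
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            # Tokens (rows) and top-k slots that routed to expert e.
            rows, slots = (topk_idx == e).nonzero(as_tuple=True)
            if rows.numel() == 0:
                continue
            out[rows] += weights[rows, slots].unsqueeze(-1) * expert(tokens[rows])
        return out.reshape(x.shape)


if __name__ == "__main__":
    layer = TopKMoE(d_model=64, d_hidden=256)
    y = layer(torch.randn(2, 10, 64))                     # (batch, seq, d_model)
    print(y.shape)                                        # torch.Size([2, 10, 64])
```

The Python loop over experts is the readable but slow formulation; the dedicated libraries listed above replace it with batched or kernel-level dispatch and add a load-balancing term so tokens spread evenly across experts.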