A curated reading list of research in Mixture-of-Experts (MoE).
☆662 · Oct 30, 2024 · Updated last year
Alternatives and similar repositories for Awesome-Mixture-of-Experts-Papers
Users that are interested in Awesome-Mixture-of-Experts-Papers are comparing it to the libraries listed below.
- A collection of AWESOME things about mixture-of-experts ☆1,272 · Dec 8, 2024 · Updated last year
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538); a minimal sketch of this gating layer appears after the list ☆1,240 · Apr 19, 2024 · Updated last year
- A fast MoE implementation for PyTorch ☆1,845 · Feb 10, 2025 · Updated last year
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆981 · Updated this week
- ☆713 · Dec 6, 2025 · Updated 3 months ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆848 · Sep 13, 2023 · Updated 2 years ago
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts. ☆140 · Aug 21, 2024 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,000 · Dec 6, 2024 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,672 · Mar 8, 2024 · Updated 2 years ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆344 · Apr 2, 2025 · Updated 11 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆124 · Dec 18, 2023 · Updated 2 years ago
- Ongoing research training transformer models at scale ☆15,827 · Updated this week
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆379 · Jun 17, 2024 · Updated last year
- ☆89 · Apr 2, 2022 · Updated 3 years ago
- Distributional Generalization in NLP. A roadmap. ☆87 · Dec 12, 2022 · Updated 3 years ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆83 · Oct 5, 2023 · Updated 2 years ago
- Awesome papers on Language-Model-as-a-Service (LMaaS) ☆545 · May 14, 2024 · Updated last year
- Paper List for In-context Learning 🌷 ☆873 · Oct 8, 2024 · Updated last year
- ☆98 · Jun 6, 2022 · Updated 3 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆114 · May 2, 2022 · Updated 3 years ago
- Latest Advances on Multimodal Large Language Models ☆17,534 · Mar 20, 2026 · Updated last week
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Jun 25, 2022 · Updated 3 years ago
- ☆20 · Oct 31, 2022 · Updated 3 years ago
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆484 · Jul 23, 2025 · Updated 8 months ago
- Fast and memory-efficient exact attention ☆22,938 · Mar 23, 2026 · Updated last week
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆159 · Jan 1, 2025 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,246 · Updated this week
- Codebase for Merging Language Models (ICML 2024) ☆863 · May 5, 2024 · Updated last year
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 ☆5,082 · Updated this week
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆48 · Oct 21, 2022 · Updated 3 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,925 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,236 · Aug 14, 2025 · Updated 7 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,770 · Aug 4, 2024 · Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆335 · Dec 13, 2025 · Updated 3 months ago
- Train transformer language models with reinforcement learning. ☆17,781 · Updated this week
- Must-read papers on prompt-based tuning for pre-trained language models. ☆4,298 · Jul 17, 2023 · Updated 2 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,841 · Mar 18, 2026 · Updated last week
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,231 · Mar 24, 2026 · Updated last week
- ☆145 · Jul 21, 2024 · Updated last year
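For context on what several of the repositories above re-implement, here is a minimal, illustrative PyTorch sketch of the top-k (sparsely-gated) Mixture-of-Experts layer described in Shazeer et al. (2017). It is a simplified example under stated assumptions, not code from any listed project: the class name `TopKMoE` and its parameters are hypothetical, and it omits the load-balancing losses and expert-capacity limits that production libraries add.

```python
# Hypothetical, simplified sketch of a top-k gated MoE layer (not from any repo above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # One small feed-forward network per expert.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The gate scores every expert for every token.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -> flatten tokens for per-token routing.
        batch, tokens, dim = x.shape
        flat = x.reshape(-1, dim)
        # Keep only the k highest-scoring experts per token and renormalize their weights.
        gate_logits = self.gate(flat)
        topk_vals, topk_idx = gate_logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)
        out = torch.zeros_like(flat)
        for slot in range(self.k):
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    # Each expert processes only the tokens routed to it.
                    out[mask] += w[mask] * expert(flat[mask])
        return out.reshape(batch, tokens, dim)


if __name__ == "__main__":
    layer = TopKMoE(dim=32)
    y = layer(torch.randn(2, 5, 32))
    print(y.shape)  # torch.Size([2, 5, 32])
```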