XueFuzhao / awesome-mixture-of-experts
A collection of AWESOME things about mixture-of-experts
☆1,262 · Dec 8, 2024 · Updated last year
Alternatives and similar repositories for awesome-mixture-of-experts
Users interested in awesome-mixture-of-experts are comparing it to the libraries listed below.
- A curated reading list of research in Mixture-of-Experts (MoE). ☆660 · Oct 30, 2024 · Updated last year
- A family of open-source Mixture-of-Experts (MoE) Large Language Models. ☆1,660 · Mar 8, 2024 · Updated last year
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538); a minimal gating sketch follows this list. ☆1,228 · Apr 19, 2024 · Updated last year
- A fast MoE implementation for PyTorch. ☆1,834 · Feb 10, 2025 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024). ☆1,003 · Dec 6, 2024 · Updated last year
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models. ☆848 · Sep 13, 2023 · Updated 2 years ago
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4. ☆965 · Dec 21, 2025 · Updated last month
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆482 · Jul 23, 2025 · Updated 6 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch. ☆344 · Apr 2, 2025 · Updated 10 months ago
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models. ☆2,302 · Jul 15, 2025 · Updated 7 months ago
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL). ☆8,989 · Feb 6, 2026 · Updated last week
- Latest advances on Multimodal Large Language Models. ☆17,337 · Feb 7, 2026 · Updated last week
- PyTorch implementation of LIMoE. ☆52 · Apr 1, 2024 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting. ☆2,768 · Aug 4, 2024 · Updated last year
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models. ☆1,894 · Jan 16, 2024 · Updated 2 years ago
- OLMoE: Open Mixture-of-Experts Language Models. ☆967 · Sep 23, 2025 · Updated 4 months ago
- 📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥 ☆1,910 · Jan 22, 2026 · Updated 3 weeks ago
- Fast and memory-efficient exact attention. ☆22,231 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆265 · Oct 3, 2025 · Updated 4 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Feb 28, 2023 · Updated 2 years ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf); a minimal sketch follows this list. ☆82 · Oct 5, 2023 · Updated 2 years ago
- Ongoing research training transformer models at scale. ☆15,213 · Updated this week
- Awesome LLM compression research papers and tools. ☆1,776 · Nov 10, 2025 · Updated 3 months ago
- A framework for few-shot evaluation of language models. ☆11,393 · Feb 11, 2026 · Updated last week
- 🚀 Efficient implementations of state-of-the-art linear attention models. ☆4,379 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2. ☆2,229 · Aug 14, 2025 · Updated 6 months ago
- AllenAI's post-training codebase. ☆3,573 · Feb 11, 2026 · Updated last week
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy". ☆103 · Jun 20, 2025 · Updated 7 months ago
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 ☆4,990 · Jan 18, 2026 · Updated last month
- A curated list of reinforcement learning with human feedback resources (continually updated). ☆4,296 · Dec 9, 2025 · Updated 2 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24). ☆146 · Sep 20, 2024 · Updated last year
- Train transformer language models with reinforcement learning. ☆17,360 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads. ☆2,706 · Jun 25, 2024 · Updated last year
- verl: Volcano Engine Reinforcement Learning for LLMs. ☆19,246 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,619 · Feb 9, 2026 · Updated last week
- Mamba SSM architecture. ☆17,186 · Jan 12, 2026 · Updated last month
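Several entries above (the Shazeer et al. re-implementation, the sparsely-gated MoE layer, FastMoE, Tutel) center on the same core mechanism: a learned gate routes each token to its top-k experts, and the expert outputs are combined with renormalized gate weights. Below is a minimal, illustrative top-k MoE layer in PyTorch, assuming plain MLP experts; the class and parameter names are hypothetical, and the real libraries add noisy gating, capacity limits, load-balancing losses, and expert parallelism on top of this.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Illustrative sparsely-gated MoE layer (hypothetical names, no load balancing)."""

    def __init__(self, dim, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                          # x: (tokens, dim)
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # renormalize over the k chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):  # each expert sees only its routed tokens
            tok, slot = (idx == e).nonzero(as_tuple=True)
            if tok.numel():
                out[tok] += weights[tok, slot, None] * expert(x[tok])
        return out
```

The usual integration point is the transformer's FFN block: `y = TopKMoE(dim=512)(x)` for token-major `x` of shape `(tokens, 512)`.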
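The Soft MoE entries above replace discrete routing with a fully differentiable scheme: each expert processes a fixed number of slots, each slot is a softmax-weighted mixture of input tokens, and each token recombines the expert outputs with a second softmax over slots. A minimal sketch following the paper's formulation (arXiv:2308.00951), again with hypothetical names and without the linked repos' normalization and batching optimizations:

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    """Illustrative Soft MoE layer: differentiable dispatch/combine over slots."""

    def __init__(self, dim, num_experts=4, slots_per_expert=1):
        super().__init__()
        self.num_experts = num_experts
        self.spe = slots_per_expert
        # phi gives one logit per (token, slot) pair
        self.phi = nn.Parameter(torch.randn(dim, num_experts * slots_per_expert))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                         # x: (batch, seq, dim)
        logits = x @ self.phi                     # (batch, seq, n_slots)
        dispatch = logits.softmax(dim=1)          # over tokens: each slot mixes tokens
        combine = logits.softmax(dim=-1)          # over slots: each token mixes slots
        slots = torch.einsum('bsd,bsn->bnd', x, dispatch)
        slots = slots.view(x.size(0), self.num_experts, self.spe, -1)
        outs = torch.stack(
            [expert(slots[:, i]) for i, expert in enumerate(self.experts)], dim=1
        ).flatten(1, 2)                           # (batch, n_slots, dim)
        return torch.einsum('bnd,bsn->bsd', outs, combine)
```

Because every token contributes to every slot, there is no token dropping and no auxiliary balancing loss; the trade-off is that compute no longer scales with a sparse top-k.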