[TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models".
☆489 · Jul 23, 2025 · Updated 8 months ago
Alternatives and similar repositories for A-Survey-on-Mixture-of-Experts-in-LLMs
Users who are interested in A-Survey-on-Mixture-of-Experts-in-LLMs are comparing it to the libraries listed below.
- A collection of AWESOME things about mixture-of-experts ☆1,274 · Dec 8, 2024 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆159 · Jul 9, 2025 · Updated 9 months ago
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts. ☆143 · Aug 21, 2024 · Updated last year
- Curated collection of papers in MoE model inference ☆371 · Mar 12, 2026 · Updated last month
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆71 · Jul 30, 2024 · Updated last year
- MoE-Visualizer is a tool designed to visualize the selection of experts in Mixture-of-Experts (MoE) models. ☆16 · Apr 8, 2025 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,675 · Mar 8, 2024 · Updated 2 years ago
- Mixture of Attention Heads ☆52 · Oct 10, 2022 · Updated 3 years ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,000 · Dec 6, 2024 · Updated last year
- ☆22 · Sep 16, 2025 · Updated 7 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆61 · Feb 7, 2025 · Updated last year
- Efficient Mixture of Experts for LLM Paper List ☆176 · Sep 28, 2025 · Updated 6 months ago
- Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model ☆13 · Feb 11, 2025 · Updated last year
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,909 · Jan 16, 2024 · Updated 2 years ago
- Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME) ☆60 · Oct 6, 2025 · Updated 6 months ago
- ☆177 · Jul 22, 2024 · Updated last year
- Implementation for the paper: CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference ☆37 · Mar 6, 2025 · Updated last year
- Code for the paper "No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations" ☆12 · Oct 31, 2024 · Updated last year
- Implementation of AAAI 2022 Paper: Go wider instead of deeper ☆32 · Oct 27, 2022 · Updated 3 years ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆403 · Apr 29, 2024 · Updated last year
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆984 · Apr 11, 2026 · Updated last week
- Work in progress LLM framework. ☆15 · Oct 31, 2024 · Updated last year
- [TMM 2025 🔥] Mixture-of-Experts for Large Vision-Language Models ☆2,314 · Jul 15, 2025 · Updated 9 months ago
- Code and updates for the ScoreRS project. ☆42 · Sep 19, 2025 · Updated 7 months ago
- Survey Paper List - Efficient LLM and Foundation Models ☆264 · Sep 22, 2024 · Updated last year
- ☆13 · Feb 21, 2024 · Updated 2 years ago
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538); see the minimal routing sketch after this list. ☆1,243 · Apr 19, 2024 · Updated 2 years ago
- ☆715 · Dec 6, 2025 · Updated 4 months ago
- A curated reading list of research in Mixture-of-Experts (MoE). ☆663 · Oct 30, 2024 · Updated last year
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆30 · Jun 30, 2025 · Updated 9 months ago
- For "Certified Robustness to Text Adversarial Attacks by Randomized [MASK]" ☆17 · Oct 8, 2024 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆117 · Updated this week
- Learning to Skip the Middle Layers of Transformers ☆17 · Aug 7, 2025 · Updated 8 months ago
- The official repository for the experiments included in the paper titled "Patch-level Routing in Mixture-of-Experts is Provably Sample-ef…" ☆14 · Feb 12, 2026 · Updated 2 months ago
- LLM Inference with Microscaling Format ☆34 · Nov 12, 2024 · Updated last year
- [NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆135 · Nov 23, 2024 · Updated last year
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Sep 29, 2024 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Jun 12, 2024 · Updated last year
- [NeurIPS'23] Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling. Haotao Wang, Ziyu Jiang, Yuning Y… ☆58 · Dec 10, 2023 · Updated 2 years ago
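
Many of the repositories above implement the same core mechanism: a learned router that sends each token to a small subset of expert feed-forward networks. The snippet below is a minimal, illustrative sketch of top-k expert routing in the spirit of the sparsely-gated MoE layer re-implemented in the list above (Shazeer et al., https://arxiv.org/abs/1701.06538). The module and parameter names (`TopKMoE`, `n_experts`, `k`) are invented for this example and are not taken from any listed library.

```python
# Minimal top-k MoE layer (illustrative sketch, not taken from any listed repository).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # Each expert is an independent two-layer feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        # The router produces one logit per expert for every token.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); flatten to a list of tokens for routing.
        tokens = x.reshape(-1, x.size(-1))
        logits = self.router(tokens)                  # (n_tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)    # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)          # renormalize over the selected experts
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if token_ids.numel():
                out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape_as(x)

# Quick shape check.
layer = TopKMoE(d_model=64, d_hidden=256)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

This sketch omits the noisy gating, expert-capacity limits, and auxiliary load-balancing loss described in the original paper; the listed libraries (e.g., Tutel and the Shazeer re-implementation) add those details along with the parallelism needed at scale.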