withinmiaov / A-Survey-on-Mixture-of-Experts-in-LLMs
[TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models".
☆482 · Updated Jul 23, 2025
Alternatives and similar repositories for A-Survey-on-Mixture-of-Experts-in-LLMs
Users interested in A-Survey-on-Mixture-of-Experts-in-LLMs are comparing it to the repositories listed below.
- A collection of AWESOME things about mixture-of-experts ☆1,262 · Updated Dec 8, 2024
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆153 · Updated Jul 9, 2025
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts. ☆141 · Updated Aug 21, 2024
- Curated collection of papers in MoE model inference ☆342 · Updated Oct 20, 2025
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,660 · Updated Mar 8, 2024
- Implementation for the paper: CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference ☆34 · Updated Mar 6, 2025
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆67 · Updated Jul 30, 2024
- LLM Inference with Microscaling Format ☆34 · Updated Nov 12, 2024
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,894 · Updated Jan 16, 2024
- Efficient Mixture of Experts for LLM Paper List ☆166 · Updated Sep 28, 2025
- Code for the paper "No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations" ☆12 · Updated Oct 31, 2024
- Learning to Skip the Middle Layers of Transformers ☆17 · Updated Aug 7, 2025
- MoE-Visualizer is a tool designed to visualize the selection of experts in Mixture-of-Experts (MoE) models. ☆16 · Updated Apr 8, 2025
- A curated reading list of research in Mixture-of-Experts (MoE). ☆660 · Updated Oct 30, 2024
- Mixture of Attention Heads ☆51 · Updated Oct 10, 2022
- Tutel MoE: Optimized Mixture-of-Experts Library, Support GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆965 · Updated Dec 21, 2025
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,003 · Updated Dec 6, 2024
- Work in progress LLM framework. ☆15 · Updated Oct 31, 2024
- The official implementation of HybridNorm: Towards Stable and Efficient Transformer Training via Hybrid Normalization ☆18 · Updated Mar 7, 2025
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆14 · Updated Feb 4, 2025
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆106 · Updated Dec 20, 2024
- [NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆134 · Updated Nov 23, 2024
- Library implementation of "No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations" ☆40 · Updated Oct 31, 2024
- This repository provides the official implementation of QSVD, a method for efficient low-rank approximation that unifies Query-Key-Value … ☆24 · Updated Dec 1, 2025
- "Visual Prompt Selection for In-Context Learning Segmentation Framework" ☆14 · Updated Dec 13, 2024
- DPO, but faster 🚀 ☆47 · Updated Dec 6, 2024
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆60 · Updated Feb 7, 2025
- [ICLR 2025] γ-MoD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆42 · Updated Oct 28, 2025
- [TMM 2025 🔥] Mixture-of-Experts for Large Vision-Language Models ☆2,302 · Updated Jul 15, 2025
- [ICLR 2025] MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts ☆266 · Updated Oct 16, 2024
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆477 · Updated Jan 17, 2025
- PyTorch implementation for Hypformer: Exploring Efficient Hyperbolic Transformer Fully in Hyperbolic Space (KDD 2024) ☆36 · Updated Aug 17, 2025
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆29 · Updated Jun 30, 2025
- [TMLR 2025] Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models ☆737 · Updated Oct 20, 2025
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆91 · Updated Dec 3, 2024
- The first spoken long-text dataset derived from live streams, designed to reflect the redundancy-rich and conversational nature of real-w… ☆12 · Updated Jun 28, 2025
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆400 · Updated Apr 29, 2024
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Updated Sep 29, 2024