Code Repository for the NeurIPS 2024 Paper "Toward Efficient Inference for Mixture of Experts".
☆19 · Oct 30, 2024 · Updated last year
Alternatives and similar repositories for moe_inference
Users that are interested in moe_inference are comparing it to the libraries listed below.
- Explore Inter-layer Expert Affinity in MoE Model Inference ☆16 · May 6, 2024 · Updated last year
- ☆10 · Jul 8, 2023 · Updated 2 years ago
- ☆29 · May 24, 2024 · Updated last year
- Battleship environment for reinforcement learning tasks ☆14 · Apr 29, 2023 · Updated 2 years ago
- Official implementation for "Pruning Large Language Models with Semi-Structural Adaptive Sparse Training" (AAAI 2025) ☆19 · Jul 1, 2025 · Updated 8 months ago
- KDSS is a framework for knowledge distillation from LLMs ☆12 · Nov 5, 2025 · Updated 4 months ago
- ☆35 · Nov 28, 2024 · Updated last year
- Code repo for efficient quantized MoE inference with mixture of low-rank compensators ☆35 · Apr 14, 2025 · Updated 11 months ago
- ☆12 · Jun 29, 2024 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Nov 11, 2025 · Updated 4 months ago
- Incremental Mobile User Profiling: Reinforcement Learning with Spatial Knowledge Graph for Modeling Event Streams ☆15 · Jul 25, 2024 · Updated last year
- Official implementation of "Modeling Multi-Task Model Merging as Adaptive Projective Gradient Descent" ☆22 · May 23, 2025 · Updated 10 months ago
- Efficient 2:4 sparse training algorithms and implementations ☆59 · Dec 8, 2024 · Updated last year
- Tutorial exercises and code for the GPU Communications Tutorial at Hot Interconnects 2025 ☆31 · Oct 22, 2025 · Updated 5 months ago
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) ☆35 · Nov 28, 2025 · Updated 4 months ago
- Code for the MLSys 2024 paper "SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models" ☆22 · Apr 13, 2024 · Updated last year
- ☆13 · Oct 13, 2025 · Updated 5 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Sep 25, 2024 · Updated last year
- Visualize expert firing frequencies across sentences in the Mixtral MoE model ☆18 · Dec 22, 2023 · Updated 2 years ago
- ☆12 · Apr 27, 2024 · Updated last year
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Mar 1, 2024 · Updated 2 years ago
- SGLang is a fast serving framework for large language models and vision language models. ☆30 · Updated this week
- [ICLR 2025] Understanding and Enhancing Safety Mechanisms of LLMs via Safety-Specific Neuron ☆30 · Apr 30, 2025 · Updated 11 months ago
- AI Infra study notes with complete high-resolution diagrams; recommended learning roadmap ☆115 · Feb 27, 2026 · Updated last month
- ☆27 · Aug 5, 2024 · Updated last year
- ☆11 · Sep 4, 2022 · Updated 3 years ago
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆35 · Dec 12, 2023 · Updated 2 years ago
- ☆40 · Nov 22, 2025 · Updated 4 months ago
- A family of efficient edge language models in 100M~1B sizes ☆19 · Feb 14, 2025 · Updated last year
- Sample codes using NVSHMEM on multi-GPU ☆30 · Jan 22, 2023 · Updated 3 years ago
- Code and data for the ACL 2024 paper "Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space" ☆19 · Jul 21, 2024 · Updated last year
- Monitors new-coin listing announcements from six major cryptocurrency exchanges: Gate, Bybit, Bitget, KuCoin, Binance, OKX ☆37 · Aug 6, 2025 · Updated 7 months ago
- A fork of minimalist-web-notepad with a little modification ☆11 · Dec 22, 2024 · Updated last year
- A fast and customizable CUDA int4 tensor core GEMM ☆15 · Aug 2, 2024 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Feb 28, 2023 · Updated 3 years ago
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆30 · Jan 22, 2026 · Updated 2 months ago
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆31 · Jun 7, 2024 · Updated last year
- ☆11 · Dec 8, 2023 · Updated 2 years ago
- Finetune Google's pre-trained ViT models from HuggingFace's model hub ☆19 · Apr 4, 2021 · Updated 4 years ago