A paper list on efficient Mixture-of-Experts (MoE) for LLMs
☆170 · Sep 28, 2025 · Updated 6 months ago
Alternatives and similar repositories for Awesome-Efficient-MoE
Users that are interested in Awesome-Efficient-MoE are comparing it to the libraries listed below.
- [NeurIPS 2024] Search for Efficient LLMs ☆16 · Jan 16, 2025 · Updated last year
- Official PyTorch implementation of CD-MOE ☆12 · Mar 18, 2026 · Updated last week
- This repository provides an improved LLamaGen Model, fine-tuned on 500,000 high-quality images, each accompanied by over 300 token prompt… ☆30 · Oct 21, 2024 · Updated last year
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 8 months ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- [ICLR 2025, IEEE TPAMI 2026] Mixture Compressor & MC# ☆69 · Feb 12, 2025 · Updated last year
- ☆32 · Jul 2, 2025 · Updated 8 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" (see the routing sketch after this list) ☆69 · Jul 30, 2024 · Updated last year
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆20 · Aug 3, 2025 · Updated 7 months ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Feb 16, 2024 · Updated 2 years ago
- Bag of Design Choices for Inference of High-Resolution Masked Generative Transformer ☆16 · Nov 21, 2024 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆140 · Jun 12, 2024 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Nov 11, 2025 · Updated 4 months ago
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆128 · Nov 26, 2025 · Updated 4 months ago
- GitHub repo for the ICLR 2025 paper "Fine-tuning Large Language Models with Sparse Matrices" ☆25 · Feb 2, 2026 · Updated last month
- Source code analysis of langgraph's deepagent ☆15 · Jan 1, 2026 · Updated 2 months ago
- Estimate MFU for DeepSeekV3 ☆26 · Jan 5, 2025 · Updated last year
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 10 months ago
- Awesome list for LLM quantization ☆405 · Oct 11, 2025 · Updated 5 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 9 months ago
- ☆23 · Nov 26, 2024 · Updated last year
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆274 · Jul 6, 2025 · Updated 8 months ago
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆30 · Jan 22, 2026 · Updated 2 months ago
- Ongoing research training transformer models at scale ☆18 · Updated this week
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆30 · Jun 30, 2025 · Updated 9 months ago
- This is the official Python version of CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Act… ☆17 · Oct 25, 2024 · Updated last year
- ☆84 · Apr 18, 2025 · Updated 11 months ago
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking" ☆47 · Jul 12, 2024 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆92 · Feb 14, 2025 · Updated last year
- Code for the paper "Unraveling the Shift of Visual Information Flow in MLLMs: From Phased Interaction to Efficient Inference" ☆13 · Jun 7, 2025 · Updated 9 months ago
- ☆191 · Jan 14, 2025 · Updated last year
- ☆20 · Sep 28, 2024 · Updated last year
- Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models" ☆41 · May 1, 2025 · Updated 10 months ago
- Awesome list for LLM pruning ☆289 · Oct 11, 2025 · Updated 5 months ago
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache ☆82 · Dec 18, 2025 · Updated 3 months ago
- Perplexity GPU Kernels ☆564 · Nov 7, 2025 · Updated 4 months ago
- ☆26 · Mar 14, 2026 · Updated 2 weeks ago
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 ☆5,082 · Updated this week
- ☆33 · Oct 13, 2025 · Updated 5 months ago
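
For context on the routing mechanism that several entries above accelerate, prune, or compress (e.g., the dynamic-routing paper linked earlier), below is a minimal sketch of top-k expert routing in a Mixture-of-Experts layer. It is a generic PyTorch illustration under assumed names and sizes (`TopKMoE`, `d_model`, `n_experts`), not code from any listed repository.

```python
# A minimal sketch of top-k expert routing, assuming a generic PyTorch-style
# MoE layer; class name, sizes, and gating details are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        # Router: scores every token against every expert.
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Each token is sent to its k best experts.
        logits = self.gate(x)                       # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)        # renormalize over the k picks
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue  # this expert received no tokens
            out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out

# Usage: 8 tokens, hidden size 16, 4 experts, top-2 routing.
moe = TopKMoE(d_model=16, n_experts=4, k=2)
print(moe(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```

Renormalizing the softmax over only the k selected logits is one common gating choice, not the only one; load-balancing losses and expert-capacity limits, which many of the efficiency papers above target, are omitted from this sketch.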