pprp / Awesome-Efficient-MoE
Efficient Mixture of Experts for LLM Paper List
☆166 · Sep 28, 2025 · Updated 4 months ago
Alternatives and similar repositories for Awesome-Efficient-MoE
Users that are interested in Awesome-Efficient-MoE are comparing it to the libraries listed below
- [NeurIPS 2024] Search for Efficient LLMs ☆16 · Jan 16, 2025 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- This repository provides an improved LLamaGen model, fine-tuned on 500,000 high-quality images, each accompanied by an over-300-token prompt… ☆30 · Oct 21, 2024 · Updated last year
- Official PyTorch implementation of CD-MOE ☆12 · Mar 29, 2025 · Updated 10 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆66 · Feb 12, 2025 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Nov 11, 2025 · Updated 3 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆67 · Jul 30, 2024 · Updated last year
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆129 · Nov 26, 2025 · Updated 2 months ago
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 6 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 8 months ago
- GitHub repo for the ICLR 2025 paper "Fine-tuning Large Language Models with Sparse Matrices" ☆24 · Feb 2, 2026 · Updated 2 weeks ago
- ☆32 · Jul 2, 2025 · Updated 7 months ago
- Bag of Design Choices for Inference of High-Resolution Masked Generative Transformer ☆16 · Nov 21, 2024 · Updated last year
- FlashInfer Bench @ MLSys 2026: Building AI agents to write high-performance GPU kernels ☆112 · Feb 9, 2026 · Updated last week
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Feb 9, 2026 · Updated last week
- Multiple GEMM operators are constructed with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 6 months ago
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 9 months ago
- ☆190 · Jan 14, 2025 · Updated last year
- ☆85 · Apr 18, 2025 · Updated 9 months ago
- ☆20 · Sep 28, 2024 · Updated last year
- ☆23 · Nov 26, 2024 · Updated last year
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Feb 16, 2024 · Updated 2 years ago
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆25 · Oct 5, 2024 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Jun 28, 2025 · Updated 7 months ago
- Source code for the paper "LongGenBench: Long-context Generation Benchmark" ☆24 · Oct 8, 2024 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆283 · May 1, 2025 · Updated 9 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Jul 6, 2025 · Updated 7 months ago
- Awesome list for LLM quantization ☆390 · Oct 11, 2025 · Updated 4 months ago
- ☆104 · Nov 7, 2024 · Updated last year
- ☆13 · Jan 7, 2025 · Updated last year
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- LongAttn: Selecting Long-context Training Data via Token-level Attention ☆15 · Jul 16, 2025 · Updated 7 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Jun 12, 2024 · Updated last year
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆29 · Jun 30, 2025 · Updated 7 months ago
- ☆129 · Jun 6, 2025 · Updated 8 months ago
- An extension library of WMMA API (Tensor Core API) ☆109 · Jul 12, 2024 · Updated last year
- Estimate MFU for DeepSeekV3 ☆26 · Jan 5, 2025 · Updated last year
- ☆12 · Sep 1, 2023 · Updated 2 years ago
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆14 · Feb 4, 2025 · Updated last year