[DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference"
☆105 · Updated Dec 15, 2025
Alternatives and similar repositories for HybriMoE
Users interested in HybriMoE are comparing it to the libraries listed below.
- Code release for AdapMoE, accepted by ICCAD 2024 ☆36 · Updated Apr 28, 2025
- ☆18 · Updated Jan 27, 2025
- PyTorch library for cost-effective, fast, and easy serving of MoE models. ☆288 · Updated Mar 16, 2026
- ☆35 · Updated Nov 28, 2024
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated Mar 18, 2026
- ☆20 · Updated Sep 28, 2024
- ☆29 · Updated Feb 3, 2026
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆262 · Updated Nov 18, 2024
- LLM Inference with Microscaling Format ☆34 · Updated Nov 12, 2024
- ☆20 · Updated Jun 1, 2025
- ☆13 · Updated Nov 1, 2021
- Helper tool for a compiler-theory course assignment (regular expressions and finite automata) ☆14 · Updated Dec 7, 2022
- [ISCA'25] LIA: A Single-GPU LLM Inference Acceleration with Cooperative AMX-Enabled CPU-GPU Computation and CXL Offloading ☆13 · Updated Jun 28, 2025
- Asynchronous pipeline parallel optimization ☆19 · Updated Feb 2, 2026
- ☆15 · Updated Jun 26, 2024
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆82 · Updated Dec 18, 2025
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated Jun 11, 2025
- ☆131 · Updated Nov 11, 2024
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆253 · Updated Feb 13, 2026
- ☆17 · Updated Feb 3, 2023
- Reimplementation of some fundamental sampling-based arm-planning algorithms ☆12 · Updated Dec 30, 2022
- A low-latency, high-throughput serving engine for LLMs ☆484 · Updated Jan 8, 2026
- Analyze the inference of large language models (LLMs): computation, storage, transmission, and hardware roofline mod… ☆628 · Updated Sep 11, 2024
- ☆12 · Updated Aug 18, 2023
- A benchmarking tool for comparing DeepSeek model deployments across different LLM API providers. ☆30 · Updated Mar 28, 2025
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (see the quantization sketch after this list) ☆363 · Updated Nov 20, 2025
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆32 · Updated Nov 16, 2024
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆44 · Updated Feb 27, 2025
- ☆14 · Updated Jun 4, 2024
- A Triton-only attention backend for vLLM ☆24 · Updated Mar 17, 2026
- ☆17 · Updated Apr 9, 2025
- KV cache store for distributed LLM inference ☆399 · Updated Nov 13, 2025
- High-performance RMSNorm implementation using SM core storage (registers and shared memory); a reference sketch follows this list ☆30 · Updated Jan 22, 2026
- Curated collection of papers on MoE model inference ☆357 · Updated Mar 12, 2026
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆164 · Updated Oct 13, 2025
- ☆155 · Updated Mar 4, 2025
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆278 · Updated Aug 31, 2024
- ☆11 · Updated May 19, 2025
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆124 · Updated Dec 25, 2025
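For readers weighing the KV-cache quantization entries above, here is a minimal sketch of per-group asymmetric low-bit quantization in the spirit of KIVI. It is a toy illustration, not KIVI's code or API: the actual method quantizes the key cache per-channel and the value cache per-token and keeps a small full-precision residual window, all of which this sketch omits. The function names and the group size here are illustrative assumptions.

```python
import torch

def quantize_asym(x: torch.Tensor, n_bits: int = 2, group_size: int = 32):
    """Per-group asymmetric quantization: x ≈ codes * scale + zero.

    Illustrative helper, not KIVI's API. Assumes the flattened length
    of `x` is a multiple of `group_size`.
    """
    groups = x.reshape(-1, group_size)
    x_min = groups.min(dim=-1, keepdim=True).values
    x_max = groups.max(dim=-1, keepdim=True).values
    q_max = (1 << n_bits) - 1                        # 3 for 2-bit codes
    scale = (x_max - x_min).clamp(min=1e-8) / q_max
    codes = ((groups - x_min) / scale).round().clamp(0, q_max).to(torch.uint8)
    return codes, scale, x_min

def dequantize_asym(codes, scale, zero, shape):
    """Reconstruct an approximation of the original tensor."""
    return (codes.float() * scale + zero).reshape(shape)

# Round-trip a toy KV-cache slice and check the reconstruction error.
kv = torch.randn(4, 128)                             # (heads, hidden) toy shape
codes, scale, zero = quantize_asym(kv, n_bits=2)
kv_hat = dequantize_asym(codes, scale, zero, kv.shape)
print((kv - kv_hat).abs().max())
```

Per-group min/max bounds keep outliers from inflating the scale of the whole tensor, which is why low-bit KV-cache schemes quantize in small groups rather than per-tensor.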
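Similarly, the RMSNorm entry above describes a CUDA kernel that stages data in registers and shared memory; the numerical semantics such a kernel must reproduce are just the standard RMSNorm formulation. The reference below is a sketch of that formulation, not that repository's code:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Reference RMSNorm: y = x / sqrt(mean(x^2) + eps) * weight."""
    inv_rms = x.pow(2).mean(dim=-1, keepdim=True).add(eps).rsqrt()
    return x * inv_rms * weight

x = torch.randn(2, 4096)          # (tokens, hidden) toy shape
w = torch.ones(4096)              # learned per-channel gain
print(rms_norm(x, w).shape)       # torch.Size([2, 4096])
```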