[DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference"
☆107 · Dec 15, 2025 · Updated 3 months ago
Alternatives and similar repositories for HybriMoE
Users interested in HybriMoE are comparing it to the libraries listed below.
- Code release for AdapMoE accepted by ICCAD 2024 ☆38 · Apr 28, 2025 · Updated 11 months ago
- ☆18 · Jan 27, 2025 · Updated last year
- PyTorch library for cost-effective, fast and easy serving of MoE models ☆293 · Updated this week
- ☆39 · Nov 28, 2024 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated this week
- ☆20 · Sep 28, 2024 · Updated last year
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆263 · Nov 18, 2024 · Updated last year
- LLM Inference with Microscaling Format ☆34 · Nov 12, 2024 · Updated last year
- ☆22 · Jun 1, 2025 · Updated 10 months ago
- Helper tool for a compiler theory course assignment (regular expressions and finite automata) ☆14 · Dec 7, 2022 · Updated 3 years ago
- ☆13 · Nov 1, 2021 · Updated 4 years ago
- [ISCA'25] LIA: A Single-GPU LLM Inference Acceleration with Cooperative AMX-Enabled CPU-GPU Computation and CXL Offloading ☆12 · Jun 28, 2025 · Updated 9 months ago
- Asynchronous pipeline parallel optimization ☆20 · Feb 2, 2026 · Updated 2 months ago
- ☆15 · Jun 26, 2024 · Updated last year
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with low-bit KV cache ☆85 · Dec 18, 2025 · Updated 3 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Jun 11, 2025 · Updated 10 months ago
- ☆132 · Nov 11, 2024 · Updated last year
- InstAttention: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference ☆16 · Mar 30, 2025 · Updated last year
- 🤖 FFPA: extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x–3x speedup vs SDPA EA 🎉 ☆257 · Feb 13, 2026 · Updated 2 months ago
- Reimplementation of some fundamental sampling-based arm planning algorithms ☆12 · Dec 30, 2022 · Updated 3 years ago
- ☆17 · Feb 3, 2023 · Updated 3 years ago
- A low-latency & high-throughput serving engine for LLMs ☆490 · Jan 8, 2026 · Updated 3 months ago
- Analyzes the inference of large language models (LLMs): computation, storage, transmission, and the hardware roofline model ☆633 · Sep 11, 2024 · Updated last year
- ☆12 · Aug 18, 2023 · Updated 2 years ago
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments ☆31 · Mar 28, 2025 · Updated last year
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆32 · Nov 16, 2024 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Jul 2, 2024 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆384 · Nov 20, 2025 · Updated 4 months ago
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ☆45 · Feb 27, 2025 · Updated last year
- ☆14 · Jun 4, 2024 · Updated last year
- A Triton-only attention backend for vLLM ☆25 · Mar 17, 2026 · Updated 3 weeks ago
- ☆17 · Apr 9, 2025 · Updated last year
- KV cache store for distributed LLM inference ☆402 · Nov 13, 2025 · Updated 5 months ago
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆30 · Jan 22, 2026 · Updated 2 months ago
- Curated collection of papers on MoE model inference ☆370 · Mar 12, 2026 · Updated last month
- ☆156 · Mar 4, 2025 · Updated last year
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆168 · Oct 13, 2025 · Updated 6 months ago
- ☆11 · May 19, 2025 · Updated 10 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆279 · Aug 31, 2024 · Updated last year