[DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference"
☆113 · Updated 4 months ago (Dec 15, 2025)
Alternatives and similar repositories for HybriMoE
Users interested in HybriMoE are comparing it to the libraries listed below.
- Code release for AdapMoE, accepted by ICCAD 2024 — ☆38 · Updated last year (Apr 28, 2025)
- ☆18 · Updated last year (Jan 27, 2025)
- PyTorch library for cost-effective, fast, and easy serving of MoE models — ☆302 · Updated 2 weeks ago (Apr 17, 2026)
- ☆39 · Updated last year (Nov 28, 2024)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling — ☆22 · Updated this week
- ☆20 · Updated last year (Sep 28, 2024)
- ☆29 · Updated 3 months ago (Feb 3, 2026)
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration — ☆265 · Updated last year (Nov 18, 2024)
- LLM Inference with Microscaling Format — ☆34 · Updated last year (Nov 12, 2024)
- ☆21 · Updated 11 months ago (Jun 1, 2025)
- Helper tool for a compiler-theory course assignment (regular expressions and finite automata)