infinigence / Semi-PD
A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.
☆ 123 · Updated last month (Dec 25, 2025)
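The description above combines three ideas: prefill and decode run as separate phases, they share one pool of GPU memory for the KV cache instead of transferring it, and each phase gets its own isolated slice of compute. A minimal sketch of that general prefill/decode-disaggregation pattern follows; all class and function names here are hypothetical and do not reflect Semi-PD's actual API.

```python
# Sketch of P/D disaggregation with a shared KV-cache pool (hypothetical names).
from dataclasses import dataclass, field
from queue import Queue


@dataclass
class Request:
    request_id: int
    prompt_tokens: list[int]
    generated: list[int] = field(default_factory=list)


class SharedKVPool:
    """Stands in for GPU memory visible to both prefill and decode workers."""

    def __init__(self) -> None:
        self._cache: dict[int, list[int]] = {}

    def put(self, request_id: int, kv: list[int]) -> None:
        self._cache[request_id] = kv

    def get(self, request_id: int) -> list[int]:
        return self._cache[request_id]


def prefill_worker(req: Request, pool: SharedKVPool, decode_queue: Queue) -> None:
    # Compute-bound phase: process the whole prompt once and write the KV cache.
    kv = list(req.prompt_tokens)  # placeholder for real attention KV tensors
    pool.put(req.request_id, kv)
    decode_queue.put(req)  # hand off the request, not a copy of the cache


def decode_worker(pool: SharedKVPool, decode_queue: Queue, max_new_tokens: int = 4) -> None:
    # Memory-bound phase: generate tokens one at a time from the shared cache.
    req: Request = decode_queue.get()
    kv = pool.get(req.request_id)
    for step in range(max_new_tokens):
        req.generated.append(len(kv) + step)  # placeholder for a real decode step


if __name__ == "__main__":
    pool, queue = SharedKVPool(), Queue()
    request = Request(request_id=0, prompt_tokens=[1, 2, 3])
    prefill_worker(request, pool, queue)
    decode_worker(pool, queue)
    print(request.generated)
```

In a real disaggregated server, the two workers would presumably be separately scheduled GPU workloads with isolated compute partitions addressing the same physical KV-cache memory, so the prefill-to-decode handoff is a pointer exchange rather than a cache copy.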
Alternatives and similar repositories for Semi-PD
Users interested in Semi-PD are comparing it to the libraries listed below.
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆ 70 · Updated 9 months ago (Apr 25, 2025)
- A lightweight design for computation-communication overlap. ☆ 219 · Updated 3 weeks ago (Jan 20, 2026)
- DeeperGEMM: a heavily optimized version of DeepGEMM ☆ 73 · Updated 9 months ago (May 5, 2025)
- ☆ 65 · Updated 9 months ago (Apr 26, 2025)
- Perplexity GPU Kernels ☆ 560 · Updated 3 months ago (Nov 7, 2025)
- NVIDIA Inference Xfer Library (NIXL) ☆ 876 · Updated this week
- KV cache store for distributed LLM inference ☆ 392 · Updated 3 months ago (Nov 13, 2025)
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆ 458 · Updated 8 months ago (May 30, 2025)
- An experimental communicating attention kernel based on DeepEP. ☆ 35 · Updated 6 months ago (Jul 29, 2025)
- Estimate MFU for DeepSeekV3 ☆ 26 · Updated last year (Jan 5, 2025)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆ 1,247 · Updated 5 months ago (Aug 28, 2025)
- ☆ 97 · Updated 10 months ago (Mar 26, 2025)
- Efficient and easy multi-instance LLM serving ☆ 527 · Updated 5 months ago (Sep 3, 2025)
- ☆ 523 · Updated 3 weeks ago (Jan 22, 2026)
- DeepSeek-V3/R1 inference performance simulator ☆ 176 · Updated 10 months ago (Mar 27, 2025)
- Open ABI and FFI for Machine Learning Systems ☆ 337 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆ 497 · Updated 10 months ago (Mar 24, 2025)
- ☆ 105 · Updated last year (Sep 9, 2024)
- Distributed Compiler based on Triton for Parallel Systems ☆ 1,350 · Updated this week
- ☆ 52 · Updated 8 months ago (May 19, 2025)
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆ 462 · Updated this week
- Artifacts for the SOSP'19 paper Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions ☆ 21 · Updated 3 years ago (Apr 15, 2022)
- ☆ 159 · Updated last year (Dec 27, 2024)
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆ 478 · Updated 9 months ago (Apr 19, 2025)
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serving. ☆ 266 · Updated this week
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆ 898 · Updated this week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆ 163 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆ 4,935 · Updated this week
- An easy-to-understand TensorOp Matmul Tutorial ☆ 410 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆ 776 · Updated 10 months ago (Apr 6, 2025)
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆ 106 · Updated 7 months ago (Jun 28, 2025)
- ☆ 152 · Updated last year (Jan 9, 2025)
- Fast low-bit matmul kernels in Triton ☆ 429 · Updated last week (Feb 1, 2026)
- A low-latency & high-throughput serving engine for LLMs ☆ 470 · Updated last month (Jan 8, 2026)
- High-performance safetensors model loader ☆ 99 · Updated last month (Jan 13, 2026)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆ 4,701 · Updated this week
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆ 101 · Updated last month (Dec 15, 2025)
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆ 46 · Updated 8 months ago (Jun 11, 2025)
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆ 87 · Updated 2 weeks ago (Jan 29, 2026)