A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.
☆125 · Updated Dec 25, 2025
Alternatives and similar repositories for Semi-PD
Users that are interested in Semi-PD are comparing it to the libraries listed below.
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆71 · Updated Apr 25, 2025
- A lightweight design for computation-communication overlap. ☆226 · Updated Jan 20, 2026
- DeeperGEMM: crazy optimized version ☆86 · Updated May 5, 2025
- ☆67 · Updated Apr 26, 2025
- An experimental communicating attention kernel based on DeepEP. ☆35 · Updated Jul 29, 2025
- NVIDIA Inference Xfer Library (NIXL) ☆970 · Updated this week
- Perplexity GPU Kernels ☆565 · Updated Nov 7, 2025
- Open ABI and FFI for Machine Learning Systems ☆375 · Updated this week
- KV cache store for distributed LLM inference ☆405 · Updated Nov 13, 2025
- Efficient and easy multi-instance LLM serving ☆543 · Updated Mar 12, 2026
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆474 · Updated May 30, 2025
- ☆51 · Updated May 19, 2025
- Estimate MFU for DeepSeekV3 ☆26 · Updated Jan 5, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,286 · Updated Aug 28, 2025
- ☆535 · Updated Apr 1, 2026
- ☆98 · Updated Mar 26, 2025
- DeepSeek-V3/R1 inference performance simulator ☆193 · Updated Mar 27, 2025
- GLake: optimizing GPU memory management and IO transmission. ☆501 · Updated Mar 24, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,403 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆498 · Updated Apr 10, 2026
- ☆25 · Updated Mar 15, 2023
- ☆12 · Updated Mar 26, 2024
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆5,071 · Updated this week
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆471 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,372 · Updated this week
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆485 · Updated Apr 19, 2025
- ☆150 · Updated Jan 9, 2025
- Disaggregated serving system for Large Language Models (LLMs). ☆801 · Updated Apr 6, 2025
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference ☆97 · Updated Apr 26, 2025
- ☆105 · Updated Sep 9, 2024
- An easy-to-understand TensorOp Matmul Tutorial ☆423 · Updated Mar 5, 2026
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆177 · Updated Feb 11, 2026
- Fast low-bit matmul kernels in Triton ☆443 · Updated Apr 4, 2026
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Updated Jun 28, 2025
- vLLM Router ☆55 · Updated Mar 11, 2024
- Artifacts for SOSP'19 paper Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions ☆21 · Updated Apr 15, 2022
- High performance Transformer implementation in C++. ☆154 · Updated Jan 18, 2025
- A Datacenter Scale Distributed Inference Serving Framework ☆6,527 · Updated this week
- ☆101 · Updated Apr 6, 2026