infinigence / Semi-PD
A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.
☆123 · Updated last month
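The description above captures the core design: prefill and decode run as separate workers, but over one shared pool of GPU memory, so the KV cache filled during prefill is read in place during decode instead of being shipped between devices. Below is a minimal Python sketch of that request flow; all names (`SharedKVCache`, `PrefillWorker`, `DecodeWorker`) are hypothetical illustrations, not Semi-PD's actual API.

```python
# Hypothetical sketch of prefill/decode disaggregation over a shared KV cache.
# Class and method names are illustrative, not Semi-PD's actual API.
from dataclasses import dataclass, field

@dataclass
class SharedKVCache:
    """KV blocks live in one GPU memory pool visible to both phases."""
    blocks: dict = field(default_factory=dict)  # request_id -> list of KV blocks

    def append(self, request_id, kv_block):
        self.blocks.setdefault(request_id, []).append(kv_block)

class PrefillWorker:
    """Compute-bound phase: process the full prompt once, filling the cache."""
    def __init__(self, cache: SharedKVCache):
        self.cache = cache

    def run(self, request_id, prompt_tokens):
        for tok in prompt_tokens:                # stand-in for one forward pass
            self.cache.append(request_id, f"kv({tok})")
        return "first_token"                     # prefill emits the first token

class DecodeWorker:
    """Memory-bound phase: one token per step, reading the shared cache in place."""
    def __init__(self, cache: SharedKVCache):
        self.cache = cache

    def step(self, request_id, last_token):
        context = self.cache.blocks[request_id]  # no KV transfer between workers
        next_token = f"tok{len(context)}"        # stand-in for sampling
        self.cache.append(request_id, f"kv({next_token})")
        return next_token

cache = SharedKVCache()
prefill, decode = PrefillWorker(cache), DecodeWorker(cache)
tok = prefill.run("req-0", ["Hello", ",", "world"])
for _ in range(3):
    tok = decode.step("req-0", tok)
```

Fully disaggregated designs place the two phases on different GPUs and must transfer KV blocks between them; sharing memory avoids that copy, and the fine-grained compute isolation mentioned in the description would then be what keeps bursty prefill work from inflating per-token decode latency.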
Alternatives and similar repositories for Semi-PD
Users interested in Semi-PD are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap. ☆219 · Updated 2 weeks ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆92 · Updated last week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆158 · Updated 4 months ago
- ☆93 · Updated 10 months ago
- High-performance Transformer implementation in C++. ☆148 · Updated last year
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆64 · Updated last year
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆91 · Updated 2 weeks ago
- DeepSeek-V3/R1 inference performance simulator ☆176 · Updated 10 months ago
- Stateful LLM Serving ☆95 · Updated 10 months ago
- ☆105 · Updated last year
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆73 · Updated 8 months ago
- ☆84 · Updated 3 months ago
- ☆342 · Updated this week
- FlagCX is a scalable and adaptive cross-chip communication library. ☆170 · Updated this week
- PyTorch distributed training acceleration framework ☆55 · Updated 5 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the worked example after this list). ☆120 · Updated last year
- ☆130 · Updated last year
- LLM training technologies developed by Kwai ☆70 · Updated last week
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆71 · Updated 4 months ago
- ☆47 · Updated last year
- Summary of the Specs of Commonly Used GPUs for LLM Training and Inference ☆73 · Updated 5 months ago
- Tile-based language built for AI computation across all scales ☆119 · Updated this week
- ☆152 · Updated last year
- Offline optimization of your disaggregated Dynamo graph ☆177 · Updated this week
- ☆112 · Updated 8 months ago
- Fast and memory-efficient exact attention ☆111 · Updated this week
- ⚡️Write HGEMM from scratch using Tensor Cores with WMMA, MMA and CuTe API, Achieve Peak⚡️ Performance. ☆148 · Updated 8 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆105 · Updated this week
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆67 · Updated last year
- High-Performance LLM Inference Operator Library ☆603 · Updated last week
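As a worked example of the roofline comparison referenced in the list above: the model caps attainable throughput at min(peak FLOP/s, arithmetic intensity × peak bandwidth), and the ridge point (peak FLOP/s ÷ peak bandwidth) separates memory-bound from compute-bound kernels. The sketch below uses rounded, illustrative A100-class spec numbers and assumed intensities for batch-1 decode and long-prompt prefill; none of these figures come from the listed repository.

```python
# Roofline model: attainable FLOP/s = min(peak FLOP/s, intensity * peak bandwidth).
# Spec values are illustrative, rounded A100-class numbers, not measurements.
PEAK_FLOPS = 312e12  # FP16 tensor-core peak, FLOP/s
PEAK_BW = 2.0e12     # HBM bandwidth, bytes/s

RIDGE = PEAK_FLOPS / PEAK_BW  # ~156 FLOP/byte: boundary between the two regimes

def attainable_tflops(intensity: float) -> float:
    """Attainable throughput (TFLOP/s) at a given arithmetic intensity (FLOP/byte)."""
    return min(PEAK_FLOPS, intensity * PEAK_BW) / 1e12

# Batch-1 decode streams every FP16 weight (2 bytes) through one multiply-add
# (2 FLOPs), so its intensity is ~1 FLOP/byte; prefill over a long prompt
# behaves like a large GEMM with far higher reuse (value assumed here).
for phase, intensity in [("decode", 1.0), ("prefill", 300.0)]:
    regime = "memory" if intensity < RIDGE else "compute"
    print(f"{phase}: {attainable_tflops(intensity):.1f} TFLOP/s ({regime}-bound)")
```

On these assumed numbers, batch-1 decode reaches only ~2 TFLOP/s (memory-bound) while prefill hits the 312 TFLOP/s compute roof, which is the asymmetry that motivates prefill/decode disaggregation in the first place.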