infinigence / Semi-PD
A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.
☆119 · Updated 7 months ago
Alternatives and similar repositories for Semi-PD
Users interested in Semi-PD are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap. ☆200 · Updated 2 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆147 · Updated 3 months ago
- High performance Transformer implementation in C++. ☆146 · Updated 11 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆84 · Updated last week
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆64 · Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆174 · Updated 8 months ago
- Stateful LLM Serving ☆90 · Updated 9 months ago
- ☆92 · Updated 8 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Updated last year
- ☆103 · Updated last year
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- ☆137 · Updated this week
- ☆79 · Updated 2 months ago
- ☆332 · Updated this week
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆66 · Updated last year
- ☆47 · Updated last year
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆75 · Updated this week
- Fast and memory-efficient exact attention ☆105 · Updated last week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆285 · Updated 4 months ago
- Tile-based language built for AI computation across all scales ☆106 · Updated this week
- Aims to implement dual-port and multi-qp solutions in deepEP ibrc transport ☆72 · Updated 7 months ago
- LLM training technologies developed by kwai ☆67 · Updated last month
- Offline optimization of your disaggregated Dynamo graph ☆135 · Updated this week
- ☆152 · Updated 11 months ago
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLM ☆68 · Updated 4 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆65 · Updated 3 months ago
- ☆126 · Updated last year
- ☆112 · Updated 7 months ago
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆51 · Updated this week
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆93 · Updated this week