infinigence / Semi-PD
A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.
☆87 · Updated last month
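For context, prefill/decode (P/D) disaggregation splits serving into a compute-bound prompt (prefill) stage and a memory-bound token-generation (decode) stage, with the prefill stage handing its KV cache to the decoder. Below is a minimal Python sketch of that handoff, using hypothetical worker and queue names; it illustrates the general pattern, not Semi-PD's actual API or its shared-memory mechanism.

```python
import queue
import threading
import time
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    max_new_tokens: int

@dataclass
class PrefillOutput:
    request: Request
    kv_cache: list  # stand-in for a handle to KV entries in shared GPU memory

prefill_q: queue.Queue = queue.Queue()
decode_q: queue.Queue = queue.Queue()

def prefill_worker():
    # Compute-bound: one full pass over the prompt builds the KV cache.
    while True:
        req = prefill_q.get()
        kv = [f"kv({tok})" for tok in req.prompt.split()]
        decode_q.put(PrefillOutput(req, kv))  # hand the cache off to the decoder

def decode_worker():
    # Memory-bound: each step rereads the whole cache and appends one entry.
    while True:
        out = decode_q.get()
        tokens = []
        for step in range(out.request.max_new_tokens):
            tokens.append(f"tok{step}")
            out.kv_cache.append(f"kv(tok{step})")
        print(f"{out.request.prompt!r} -> {tokens}")

threading.Thread(target=prefill_worker, daemon=True).start()
threading.Thread(target=decode_worker, daemon=True).start()
prefill_q.put(Request("hello world", max_new_tokens=3))
time.sleep(0.5)  # let the daemon workers drain the demo request
```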
Alternatives and similar repositories for Semi-PD
Users interested in Semi-PD are comparing it to the libraries listed below
- A lightweight design for computation-communication overlap. ☆141 · Updated last week
- PyTorch distributed training acceleration framework ☆49 · Updated 4 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with WMMA, MMA and CuTe API, Achieve Peak⚡️ Performance. ☆79 · Updated last month
- High performance Transformer implementation in C++. ☆125 · Updated 5 months ago
- ☆73 · Updated 2 months ago
- DeepSeek-V3/R1 inference performance simulator ☆148 · Updated 2 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆100 · Updated last year
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆61 · Updated last year
- ☆77 · Updated last month
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆59 · Updated last year
- ☆66 · Updated last week
- ☆86 · Updated 2 months ago
- ☆96 · Updated 9 months ago
- ☆37 · Updated 6 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 · Updated 10 months ago
- ☆148 · Updated 5 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆92 · Updated 3 weeks ago
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLM ☆45 · Updated 3 months ago
- Summary of some awesome work for optimizing LLM inference ☆76 · Updated 2 weeks ago
- Stateful LLM Serving ☆73 · Updated 3 months ago
- DeeperGEMM: crazy optimized version ☆69 · Updated last month
- ☆103 · Updated 7 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆79 · Updated 7 months ago
- ☆91 · Updated 5 months ago
- ☆60 · Updated last month
- ☆39 · Updated last year
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆42 · Updated last month
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆53 · Updated last week
- ☆62 · Updated last year
- KV cache store for distributed LLM inference ☆269 · Updated 2 weeks ago
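The roofline item above reduces to one formula: attainable throughput is min(peak compute, memory bandwidth × arithmetic intensity). A back-of-the-envelope sketch follows; the hardware figures are rough A100-class numbers used only for illustration, not measurements from any listed repo.

```python
def roofline(peak_flops: float, mem_bw_bytes: float, intensity: float) -> float:
    """Attainable FLOP/s for a kernel with `intensity` FLOPs per byte moved."""
    return min(peak_flops, mem_bw_bytes * intensity)

# Illustrative accelerator: ~312 TFLOP/s fp16 tensor compute, ~2 TB/s HBM.
PEAK, BW = 312e12, 2.0e12

# Batch-1 decode streams every fp16 weight once per token (~2 FLOPs per
# 2-byte weight), so intensity ≈ 1 FLOP/byte: pinned to the memory roof.
decode = roofline(PEAK, BW, intensity=1.0)

# Prefill reuses each weight across many prompt tokens, pushing intensity
# into the hundreds and onto the compute roof.
prefill = roofline(PEAK, BW, intensity=300.0)

print(f"decode:  {decode / 1e12:.1f} TFLOP/s (memory-bound)")
print(f"prefill: {prefill / 1e12:.1f} TFLOP/s (compute-bound)")
```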