sgl-project / sgl-kernel-npu
SGLang kernel library for NPU
☆73 · Updated this week
Alternatives and similar repositories for sgl-kernel-npu
Users interested in sgl-kernel-npu are comparing it to the libraries listed below.
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆483 · Updated this week
- ☆152 · Updated 8 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆115 · Updated 6 months ago
- FlagScale is a large model toolkit based on open-source projects. ☆407 · Updated last week
- ☆316 · Updated last week
- Ascend TileLang adapter ☆146 · Updated this week
- ☆126 · Updated this week
- DeepSeek-V3/R1 inference performance simulator ☆168 · Updated 7 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆66 · Updated last year
- A lightweight design for computation-communication overlap. ☆187 · Updated last month
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLMs ☆64 · Updated 3 months ago
- Allow torch tensor memory to be released and resumed later ☆167 · Updated last week
- PyTorch distributed training acceleration framework ☆53 · Updated 3 months ago
- High performance Transformer implementation in C++. ☆142 · Updated 10 months ago
- ☆102 · Updated last year
- Materials for learning SGLang ☆650 · Updated this week
- ☆130 · Updated 10 months ago
- Efficient and easy multi-instance LLM serving ☆510 · Updated 2 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆90 · Updated 7 months ago
- ☆111 · Updated 6 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆82 · Updated this week
- Fast and memory-efficient exact attention ☆99 · Updated this week
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆82 · Updated last week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆288 · Updated 5 months ago
- ☆97 · Updated 7 months ago
- Examples of CUDA implementations with Cutlass CuTe ☆249 · Updated 4 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆268 · Updated 3 months ago
- [DAC'25] Official implement of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆89 · Updated 5 months ago
- A simple calculation for LLM MFU; a worked sketch of the formula follows this list. ☆50 · Updated 2 months ago
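For context on the last entry: MFU (Model FLOPs Utilization) compares the model FLOPs/s a workload actually achieves against the accelerator's peak FLOPs/s. Below is a minimal Python sketch of that standard formula; the function name and sample numbers are illustrative assumptions, not the API of the repository listed above.

```python
# Illustrative sketch of the standard MFU (Model FLOPs Utilization) formula.
# The function name and the sample numbers are assumptions for illustration,
# not the API of the linked repository.

def mfu(tokens_per_second: float,
        n_params: float,
        peak_flops: float,
        flops_per_token_factor: float = 2.0) -> float:
    """MFU = achieved model FLOPs/s divided by the hardware's peak FLOPs/s.

    For a decoder-only transformer with N parameters, a common
    approximation is ~2N FLOPs per token for an inference forward pass
    and ~6N per token for a training step (forward + backward),
    ignoring the attention term.
    """
    achieved_flops = tokens_per_second * flops_per_token_factor * n_params
    return achieved_flops / peak_flops

# Example: a 7B-parameter model decoding 2,000 tokens/s on a GPU with a
# 312 TFLOPs/s peak (e.g., A100 BF16 dense) works out to roughly 9% MFU.
print(f"{mfu(2000, 7e9, 312e12):.1%}")
```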