bytedance-iaas / sglang
SGLang is a fast serving framework for large language models and vision language models.
☆18 · Updated this week
Alternatives and similar repositories for sglang
Users interested in sglang are comparing it to the libraries listed below.
- ☆96 · Updated 10 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- An NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆87 · Updated last month
- Toolchain built around Megatron-LM for distributed training. ☆84 · Updated last month
- High-performance LLM inference operator library. ☆222 · Updated last week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆90 · Updated 2 weeks ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak performance. ☆145 · Updated 8 months ago
- Pipeline parallelism emulation and visualization. ☆76 · Updated 3 weeks ago
- Venus Collective Communication Library, supported by SII and Infrawaves. ☆137 · Updated this week
- Fast and memory-efficient exact attention. ☆110 · Updated last week
- KV cache store for distributed LLM inference. ☆387 · Updated 2 months ago
- OneFlow Serving. ☆20 · Updated 9 months ago
- FlagCX is a scalable and adaptive cross-chip communication library. ☆170 · Updated last week
- DLBlas: clean and efficient kernels. ☆33 · Updated this week
- ☆47 · Updated last year
- DLSlime: flexible & efficient heterogeneous transfer toolkit. ☆92 · Updated this week
- PyTorch distributed training acceleration framework. ☆55 · Updated 5 months ago
- A simple calculation for LLM MFU. ☆66 · Updated 4 months ago
- Utility scripts for PyTorch (e.g. make Perfetto show some disappearing kernels, a memory profiler that understands more low-level allocatio…). ☆81 · Updated 4 months ago
- 🤖 FFPA: extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑ 🎉 vs SDPA EA. ☆246 · Updated last week
- ☆22 · Updated last week
- ☆34 · Updated 11 months ago
- ☆340 · Updated 3 weeks ago
- A Triton JIT runtime and FFI provider in C++. ☆30 · Updated this week
- High-performance distributed data-shuffling (all-to-all) library for MoE training and inference. ☆109 · Updated last month
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport. ☆73 · Updated 8 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆102 · Updated last week
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters. ☆55 · Updated last year
- Allows torch tensor memory to be released and resumed later. ☆207 · Updated 2 weeks ago
- Offline optimization of your disaggregated Dynamo graph. ☆168 · Updated this week