jd-opensource / xllm-service
A flexible serving framework that delivers efficient and fault-tolerant LLM inference for clustered deployments.
☆80 · Updated last month
Alternatives and similar repositories for xllm-service
Users interested in xllm-service are comparing it to the libraries listed below.
- ☆33 · Updated 10 months ago
- ☆135 · Updated this week
- ☆152 · Updated 11 months ago
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆118 · Updated 6 months ago
- Fast and memory-efficient exact attention ☆104 · Updated this week
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆82 · Updated this week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆282 · Updated 3 months ago
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆70 · Updated 7 months ago
- FlagTree is a unified compiler supporting multiple AI chip backends for custom deep learning operations, forked from triton-lang… ☆145 · Updated this week
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance (see the WMMA sketch after this list) ⚡️ ☆137 · Updated 7 months ago
- ☆130 · Updated 11 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated last year
- High-performance Transformer implementation in C++. ☆143 · Updated 10 months ago
- ☆102 · Updated last year
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆98 · Updated 2 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated 3 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆90 · Updated this week
- ☆153 · Updated 9 months ago
- ☆47 · Updated last year
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆84 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Llama 2 inference ☆43 · Updated 2 years ago
- ☆112 · Updated 6 months ago
- ☆97 · Updated 8 months ago
- ☆140 · Updated last year
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆43 · Updated 9 months ago
- LLM inference via Triton (flexible & modular): focused on kernel optimization using CUBIN binaries, starting from the gpt-oss model ☆60 · Updated last month
- ☆59 · Updated 4 months ago
- fp8 flash attention implemented on the Ada architecture with the cutlass repository ☆78 · Updated last year
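Several entries above build GEMM directly on Tensor Cores via the WMMA API. As a rough illustration of what that level of the stack looks like (a minimal sketch, not code from any listed repository; the kernel name, tile shape, and launch layout are illustrative assumptions), here each warp computes one 16×16 tile of C = A·B for row-major half inputs whose dimensions are multiples of 16:

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

constexpr int WMMA_M = 16, WMMA_N = 16, WMMA_K = 16;

// One warp per 16x16 output tile; M, N, K assumed to be multiples of 16.
__global__ void hgemm_wmma(const half *A, const half *B, float *C,
                           int M, int N, int K) {
    // Tile coordinates of this warp in the output grid.
    int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warpN = blockIdx.y * blockDim.y + threadIdx.y;
    if (warpM * WMMA_M >= M || warpN * WMMA_N >= N) return;

    wmma::fragment<wmma::matrix_a, WMMA_M, WMMA_N, WMMA_K, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, WMMA_M, WMMA_N, WMMA_K, half, wmma::row_major> bFrag;
    wmma::fragment<wmma::accumulator, WMMA_M, WMMA_N, WMMA_K, float> cFrag;
    wmma::fill_fragment(cFrag, 0.0f);

    // March along K, issuing one 16x16x16 Tensor Core MMA per step.
    for (int k = 0; k < K; k += WMMA_K) {
        wmma::load_matrix_sync(aFrag, A + warpM * WMMA_M * K + k, K);
        wmma::load_matrix_sync(bFrag, B + k * N + warpN * WMMA_N, N);
        wmma::mma_sync(cFrag, aFrag, bFrag, cFrag);
    }
    wmma::store_matrix_sync(C + warpM * WMMA_M * N + warpN * WMMA_N,
                            cFrag, N, wmma::mem_row_major);
}
```

With, say, `dim3 block(128, 4)` (4 warps along M, 4 tiles along N per block), a `dim3 grid((M + 63) / 64, (N + 63) / 64)` launch covers the full output; the repositories above layer software pipelining, shared-memory staging, and swizzling on top of this skeleton to approach peak throughput.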