sgl-project / genai-bench
Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serving systems.
☆266 · Updated this week
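For context on what "token-level performance evaluation" involves, the sketch below measures time-to-first-token (TTFT), inter-token latency, and output throughput for a single streamed request. This is a minimal illustration only, not genai-bench's actual API; the endpoint URL and model name are placeholders for any OpenAI-compatible server.

```python
# Minimal, hypothetical sketch of token-level benchmarking (NOT genai-bench's API).
# Assumes an OpenAI-compatible completions endpoint with SSE streaming;
# ENDPOINT and MODEL below are placeholders.
import json
import time

import requests

ENDPOINT = "http://localhost:30000/v1/completions"  # placeholder URL
MODEL = "my-model"  # placeholder model name


def measure_one_request(prompt: str, max_tokens: int = 128) -> dict:
    """Stream one completion and derive per-token latency metrics.

    Simplification: each SSE data chunk is counted as one token event;
    a real benchmark would tokenize the returned text to count exactly.
    """
    start = time.perf_counter()
    chunk_times = []
    with requests.post(
        ENDPOINT,
        json={"model": MODEL, "prompt": prompt,
              "max_tokens": max_tokens, "stream": True},
        stream=True,
        timeout=120,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line or not line.startswith(b"data: "):
                continue  # skip keep-alives and non-data lines
            payload = line[len(b"data: "):]
            if payload.strip() == b"[DONE]":
                break
            if json.loads(payload)["choices"][0].get("text"):
                chunk_times.append(time.perf_counter())

    if not chunk_times:
        return {"error": "no tokens received"}
    gaps = [b - a for a, b in zip(chunk_times, chunk_times[1:])]
    return {
        "ttft_s": chunk_times[0] - start,                      # time to first token
        "mean_itl_s": sum(gaps) / len(gaps) if gaps else 0.0,  # inter-token latency
        "output_tok_per_s": len(chunk_times) / (chunk_times[-1] - start),
    }


if __name__ == "__main__":
    print(measure_one_request("Explain KV caching in one paragraph."))
```

A full benchmark run repeats this across many concurrent requests and traffic patterns and reports the resulting latency distributions.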
Alternatives and similar repositories for genai-bench
Users who are interested in genai-bench are comparing it to the libraries listed below.
- Materials for learning SGLang ☆743 · Jan 5, 2026 · Updated last month
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆370 · Updated this week
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- Perplexity GPU Kernels ☆560 · Nov 7, 2025 · Updated 3 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆683 · Updated this week
- Kernel Library Wheel for SGLang ☆17 · Updated this week
- Fast and memory-efficient exact attention ☆18 · Jan 23, 2026 · Updated 3 weeks ago
- NVIDIA Inference Xfer Library (NIXL) ☆876 · Updated this week
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments. ☆30 · Mar 28, 2025 · Updated 10 months ago
- DeeperGEMM: crazy optimized version ☆73 · May 5, 2025 · Updated 9 months ago
- NVIDIA device plugin for Kubernetes ☆15 · Sep 9, 2019 · Updated 6 years ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,350 · Updated this week
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆441 · Feb 4, 2026 · Updated last week
- ☆13 · Jan 7, 2025 · Updated last year
- KV cache store for distributed LLM inference ☆392 · Nov 13, 2025 · Updated 3 months ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,701 · Updated this week
- vLLM performance dashboard ☆41 · Apr 26, 2024 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆4,935 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆776 · Apr 6, 2025 · Updated 10 months ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆93 · Jan 16, 2026 · Updated 3 weeks ago
- A TUI-based utility for real-time monitoring of InfiniBand traffic and performance metrics on the local node ☆63 · Dec 19, 2025 · Updated last month
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆269 · Feb 2, 2026 · Updated last week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Dec 25, 2025 · Updated last month
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,094 · Feb 7, 2026 · Updated last week
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- Allow torch tensor memory to be released and resumed later ☆216 · Jan 13, 2026 · Updated last month
- Fastest kernels written from scratch ☆533 · Sep 18, 2025 · Updated 4 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆283 · May 1, 2025 · Updated 9 months ago
- JAX backend for SGL ☆237 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆6,052 · Updated this week
- A Top-Down Profiler for GPU Applications ☆22 · Feb 29, 2024 · Updated last year
- Stateful LLM Serving ☆95 · Mar 11, 2025 · Updated 11 months ago
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… ☆1,208 · Feb 7, 2026 · Updated last week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆326 · Updated this week
- DeepSeek-V3/R1 inference performance simulator ☆176 · Mar 27, 2025 · Updated 10 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · May 30, 2025 · Updated 8 months ago
- A low-latency & high-throughput serving engine for LLMs ☆470 · Jan 8, 2026 · Updated last month
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆3,888 · Updated this week
- My learning notes for ML SYS. ☆5,306 · Jan 30, 2026 · Updated 2 weeks ago