sgl-project / genai-bench
Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serving systems.
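Genai-bench's own CLI and metric definitions live in its repository; purely as an illustration of what "token-level" evaluation means, here is a minimal, self-contained sketch (not genai-bench's API: `mock_token_stream` and the metric dictionary keys are invented for this example) that measures time-to-first-token and inter-token latency over a streamed response:

```python
import time
from typing import Iterator, List


def mock_token_stream(n_tokens: int, delay_s: float = 0.001) -> Iterator[str]:
    """Stand-in for a streaming LLM endpoint; yields one token at a time."""
    for i in range(n_tokens):
        time.sleep(delay_s)
        yield f"tok{i}"


def measure_stream(stream: Iterator[str]) -> dict:
    """Record time-to-first-token (TTFT) and per-token arrival gaps."""
    start = time.perf_counter()
    arrivals: List[float] = []
    for _ in stream:
        arrivals.append(time.perf_counter())
    ttft = arrivals[0] - start
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    total = arrivals[-1] - start
    return {
        "ttft_s": ttft,
        "mean_inter_token_s": sum(gaps) / len(gaps) if gaps else 0.0,
        "output_tokens_per_s": len(arrivals) / total,
        "num_tokens": len(arrivals),
    }


metrics = measure_stream(mock_token_stream(8))
```

A real harness would issue many concurrent requests against a live endpoint and aggregate these per-request numbers into latency percentiles and throughput curves.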
☆251 · Updated this week
Alternatives and similar repositories for genai-bench
Users interested in genai-bench are comparing it to the libraries listed below.
- Allow torch tensor memory to be released and resumed later ☆202 · Updated last week
- Perplexity GPU Kernels ☆553 · Updated 2 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆457 · Updated 7 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆375 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆467 · Updated last week
- torchcomms: a modern PyTorch communications API ☆320 · Updated last week
- Efficient and easy multi-instance LLM serving ☆521 · Updated 4 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆753 · Updated last week
- Materials for learning SGLang ☆717 · Updated 2 weeks ago
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆356 · Updated this week
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆80 · Updated 4 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆261 · Updated last month
- A high-performance and lightweight router for large-scale vLLM deployments ☆82 · Updated 3 weeks ago
- ☆340 · Updated 2 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆626 · Updated last week
- Offline optimization of your disaggregated Dynamo graph ☆151 · Updated this week
- ☆96 · Updated 9 months ago
- Stateful LLM Serving ☆94 · Updated 10 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆542 · Updated last week
- JAX backend for SGL ☆218 · Updated last week
- A minimal implementation of vllm. ☆65 · Updated last year
- ☆124 · Updated last year
- Applied AI experiments and examples for PyTorch ☆314 · Updated 4 months ago
- LLM Serving Performance Evaluation Harness ☆82 · Updated 10 months ago
- Zero Bubble Pipeline Parallelism ☆447 · Updated 8 months ago
- Toolchain built around Megatron-LM for distributed training ☆80 · Updated last month
- KV cache store for distributed LLM inference ☆385 · Updated 2 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆198 · Updated this week
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated last year
- Perplexity open source garden for inference technology ☆332 · Updated 3 weeks ago