Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs
☆935 · Mar 21, 2026 · Updated this week
Alternatives and similar repositories for guidellm
Users interested in guidellm are comparing it to the libraries listed below
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,891 · Updated this week
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes ☆2,657 · Updated this week
- vLLM's reference system for K8s-native cluster-wide deployment with community-driven performance optimization ☆2,227 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Dec 4, 2025 · Updated 3 months ago
- Supercharge Your LLM with the Fastest KV Cache Layer ☆7,693 · Updated this week
- Auto-tuning for vLLM: getting the best performance out of your LLM deployment (vLLM + guidellm + Optuna) ☆50 · Mar 12, 2026 · Updated last week
- LLMPerf is a library for validating and benchmarking LLMs (a latency-measurement sketch in the same spirit follows this list) ☆1,095 · Dec 9, 2024 · Updated last year
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆413 · Mar 3, 2026 · Updated 2 weeks ago
- NVIDIA Inference Xfer Library (NIXL) ☆945 · Updated this week
- vLLM adapter for a TGIS-compatible gRPC server. ☆55 · Mar 16, 2026 · Updated last week
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆397 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆949 · Oct 29, 2025 · Updated 4 months ago
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆6,347 · Updated this week
- KV cache store for distributed LLM inference ☆399 · Nov 13, 2025 · Updated 4 months ago
- A tool for benchmarking LLMs on Modal ☆50 · Aug 29, 2025 · Updated 6 months ago
- A Rust reimplementation of genai-bench for benchmarking LLM serving systems at high concurrency with accurate timing and industry-standar… ☆279 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,739 · May 21, 2025 · Updated 10 months ago
- Cost-efficient and pluggable Infrastructure components for GenAI inference ☆4,682 · Updated this week
- KV cache compression for high-throughput LLM inference ☆153 · Feb 5, 2025 · Updated last year
- Gateway API Inference Extension ☆609 · Mar 15, 2026 · Updated last week
- Collection of demos for building Llama Stack based apps on OpenShift ☆63 · Mar 14, 2026 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the quickstart sketch after this list) ☆73,479 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,958 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,694 · Mar 13, 2026 · Updated last week
- Agentic AI framework examples with Red Hat AI ☆18 · Jul 2, 2025 · Updated 8 months ago
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆682 · Updated this week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆285 · Updated this week
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆144 · Dec 4, 2024 · Updated last year
- The AI Accelerator is a template project for setting up Red Hat OpenShift AI using GitOps ☆66 · Updated this week
- Fast, Flexible and Portable Structured Generation ☆1,586 · Mar 13, 2026 · Updated last week
- Model Express is a Rust-based component meant to be placed next to existing model inference systems to speed up their startup times and i… ☆40 · Updated this week
- Development containers for triton and triton-cpu ☆24 · Mar 9, 2026 · Updated last week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,120 · Updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆813 · Updated this week
- Large Language Model Text Generation Inference ☆10,812 · Jan 8, 2026 · Updated 2 months ago
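
For context on the engine that many of the projects above plug into, here is a minimal offline-inference sketch using vLLM's documented `LLM`/`SamplingParams` Python API. The model ID is only an example; any HuggingFace-compatible model works.

```python
# Minimal vLLM offline-inference sketch.
from vllm import LLM, SamplingParams

prompts = ["What is speculative decoding?"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=128)

# Small model chosen purely as a quick smoke test.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```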
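
Since several entries in the list (guidellm, LLMPerf, genai-bench) are benchmarking tools, here is a hand-rolled latency-measurement sketch against an OpenAI-compatible endpoint such as one served by vLLM, SGLang, or TGI. This is not any of those tools' actual APIs; the `base_url`, `api_key`, and model name are assumptions about a locally served model.

```python
# Hand-rolled end-to-end latency sketch against an OpenAI-compatible
# endpoint. NOT guidellm's or LLMPerf's API; base_url, api_key, and
# model name are assumptions about a local deployment.
import time
from statistics import mean

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

latencies = []
for _ in range(10):
    start = time.perf_counter()
    client.chat.completions.create(
        model="facebook/opt-125m",
        messages=[{"role": "user", "content": "Say hello in five words."}],
        max_tokens=32,
    )
    latencies.append(time.perf_counter() - start)

print(f"mean end-to-end latency: {mean(latencies):.3f}s")
```

Dedicated tools like guidellm go further than this sketch by sweeping request rates and reporting token-level metrics (time to first token, inter-token latency), which is what makes them useful for capacity planning.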