LLMPerf is a library for validating and benchmarking LLMs
☆1,091 · Updated Dec 9, 2024
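For context, the snippet below is a minimal sketch of the kind of per-request measurement llmperf automates at scale: streaming one request to an OpenAI-compatible endpoint and recording time-to-first-token and output rate. It is not llmperf's own API; the base URL, model name, and prompt are placeholder assumptions, and the chunk count only approximates token throughput.

```python
# Minimal sketch (not llmperf's API): measure time-to-first-token and output
# rate for one streaming request against an OpenAI-compatible endpoint.
import json
import time

import requests

BASE_URL = "http://localhost:8000/v1"        # assumption: a local OpenAI-compatible server
MODEL = "meta-llama/Llama-3.1-8B-Instruct"   # assumption: whichever model that server hosts


def benchmark_once(prompt: str, max_tokens: int = 128) -> dict:
    """Send one streaming chat completion and record coarse latency metrics."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": True,
    }
    start = time.perf_counter()
    first_chunk_at = None
    chunks = 0
    with requests.post(f"{BASE_URL}/chat/completions", json=payload,
                       stream=True, timeout=600) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            # OpenAI-style streaming sends server-sent events prefixed with "data: "
            if not line.startswith(b"data: "):
                continue
            data = line[len(b"data: "):]
            if data == b"[DONE]":
                break
            choices = json.loads(data).get("choices") or []
            if choices and choices[0].get("delta", {}).get("content"):
                if first_chunk_at is None:
                    first_chunk_at = time.perf_counter()
                chunks += 1
    end = time.perf_counter()
    return {
        "ttft_s": (first_chunk_at or end) - start,  # time to first streamed content
        "total_s": end - start,                     # end-to-end request latency
        # chunks only approximate tokens; a real harness would count with the tokenizer
        "chunks_per_s": chunks / (end - start) if end > start else 0.0,
    }


if __name__ == "__main__":
    print(benchmark_once("Write a haiku about GPU schedulers."))
```

llmperf itself drives many such requests concurrently and reports the resulting latency and throughput distributions; the sketch above only illustrates the single-request measurement that such a benchmark builds on.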
Alternatives and similar repositories for llmperf
Users interested in llmperf are comparing it to the libraries listed below.
- RayLLM - LLMs on Ray (Archived). Read README for more info. ☆1,267 · Updated Mar 13, 2025
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆23,905 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,919 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,618 · Updated this week
- Large Language Model Text Generation Inference ☆10,788 · Updated Jan 8, 2026
- FlashInfer: Kernel Library for LLM Serving ☆5,057 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,938 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,234 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai ☆94 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,095 · Updated Jun 30, 2025
- Serving multiple LoRA finetuned LLM as one ☆1,145 · Updated May 8, 2024
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,728 · Updated May 21, 2025
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,710 · Updated Jun 25, 2024
- A framework for few-shot evaluation of language models. ☆11,478 · Updated Feb 15, 2026
- A Datacenter Scale Distributed Inference Serving Framework ☆6,154 · Updated this week
- LLM Inference benchmark ☆433 · Updated Jul 23, 2024
- The Triton TensorRT-LLM Backend ☆926 · Updated Feb 20, 2026
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,787 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,843 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,443 · Updated Jul 17, 2025
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆880 · Updated this week
- Supercharge Your LLM with the Fastest KV Cache Layer ☆6,923 · Updated this week
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization ☆2,187 · Updated this week
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, o… ☆9,516 · Updated this week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,899 · Updated Jan 21, 2024
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆329 · Updated Sep 25, 2025
- Disaggregated serving system for Large Language Models (LLMs). ☆777 · Updated Apr 6, 2025
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,393 · Updated this week
- Supercharge Your LLM Application Evaluations 🚀 ☆12,736 · Updated this week
- Cost-efficient and pluggable Infrastructure components for GenAI inference ☆4,650 · Updated this week
- Transformer related optimization, including BERT, GPT ☆6,394 · Updated Mar 27, 2024
- NVIDIA Inference Xfer Library (NIXL) ☆898 · Updated this week
- Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali ☆2,688 · Updated Feb 5, 2026
- An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm. ☆5,027 · Updated Apr 11, 2025
- Accessible large language models via k-bit quantization for PyTorch. ☆7,997 · Updated this week
- Fast and memory-efficient exact attention ☆22,361 · Updated this week