LLM Inference benchmark
☆432 · Jul 23, 2024 · Updated last year
Alternatives and similar repositories for llm-inference-benchmark
Users interested in llm-inference-benchmark are comparing it to the libraries listed below.
- LLMPerf is a library for validating and benchmarking LLMs ☆1,090 · Dec 9, 2024 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,880 · Updated this week
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆5,040 · Feb 27, 2026 · Updated last week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,645 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,931 · Updated this week
- fastllm is a high-performance large-model inference library with no backend dependencies. It supports both tensor-parallel inference for dense models and mixed-mode inference for MoE models; any GPU with more than 10 GB of memory can run the full DeepSeek model. On a dual-socket 9004/9005 server with a single GPU, the original full-precision DeepSeek model reaches 20 tps at single concurrency; the INT4-quantized model reaches 30 tp… ☆4,171 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai ☆95 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,216 · Updated this week
- Evaluation for AI apps and agents ☆44 · Jan 18, 2024 · Updated 2 years ago
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,993 · Updated this week
- A collection of reproducible inference engine benchmarks ☆38 · Apr 22, 2025 · Updated 10 months ago
- Materials for learning SGLang ☆766 · Jan 5, 2026 · Updated 2 months ago
- FlashInfer: Kernel Library for LLM Serving ☆5,101 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆273 · Aug 6, 2025 · Updated 7 months ago
- ☆979 · Feb 7, 2025 · Updated last year
- ☆2,502 · Feb 13, 2026 · Updated 3 weeks ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆620 · Sep 11, 2024 · Updated last year
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,059 · Updated this week
- Performance testing for LLM inference services ☆44 · Dec 17, 2023 · Updated 2 years ago
- Retrieval and Retrieval-augmented LLMs ☆11,352 · Dec 15, 2025 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,883 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆8,756 · Jan 24, 2026 · Updated last month
- Run any open-source LLMs, such as DeepSeek and Llama, as OpenAI-compatible API endpoints in the cloud. ☆12,148 · Mar 2, 2026 · Updated last week
- Disaggregated serving system for Large Language Models (LLMs). ☆778 · Apr 6, 2025 · Updated 11 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs… ☆614 · Nov 24, 2025 · Updated 3 months ago
- ☆19 · Apr 11, 2024 · Updated last year
- An Envoy-inspired, ultimate LLM-first gateway for LLM serving and downstream application developers and enterprises ☆26 · Apr 24, 2025 · Updated 10 months ago
- Chinese-Mixtral-8x7B, a Chinese version of Mixtral-8x7B ☆654 · Aug 17, 2024 · Updated last year
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, … ☆6,705 · Feb 27, 2026 · Updated last week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,426 · Jun 2, 2025 · Updated 9 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,191 · Sep 30, 2025 · Updated 5 months ago
- ☆437 · Sep 18, 2025 · Updated 5 months ago
- Large Language Model Text Generation Inference ☆10,795 · Jan 8, 2026 · Updated 2 months ago
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,084 · Updated this week
- High-Performance Linpack benchmark, adapted for GPU backends ☆12 · Sep 12, 2022 · Updated 3 years ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆5,028 · Apr 11, 2025 · Updated 10 months ago
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆67,966 · Updated this week
- ☆56 · Nov 18, 2024 · Updated last year
- A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation and performance benchmarking ☆2,463 · Updated this week