ray-project / llmperf
LLMPerf is a library for validating and benchmarking LLMs
☆1,081 · Updated last year
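To make the listing more concrete: the core measurements a tool like LLMPerf reports are streaming latency and throughput against a served model. Below is a minimal, illustrative sketch of that kind of probe using the `openai` Python client (>=1.0) against an OpenAI-compatible endpoint. The `base_url`, `api_key`, and model name are placeholders, and this is not LLMPerf's own interface, just an example of the metrics (time to first token, streaming rate) such benchmarks produce.

```python
# Illustrative latency/throughput probe; endpoint and model are placeholders.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

start = time.perf_counter()
first_token_at = None
chunks_received = 0

stream = client.chat.completions.create(
    model="my-model",  # placeholder model name
    messages=[{"role": "user", "content": "Write a haiku about latency."}],
    max_tokens=128,
    stream=True,
)
for chunk in stream:
    # Some chunks (e.g. the final usage chunk) may carry no choices/content.
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # time to first token (TTFT)
        chunks_received += 1
end = time.perf_counter()

if first_token_at is not None:
    print(f"TTFT: {first_token_at - start:.3f}s")
    print(f"Chunks/sec after first token: {chunks_received / (end - first_token_at):.1f}")
```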
Alternatives and similar repositories for llmperf
Users interested in llmperf are comparing it to the libraries listed below.
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆819 · Updated this week
- The Triton TensorRT-LLM Backend ☆917 · Updated last week
- Serving multiple LoRA finetuned LLM as one ☆1,134 · Updated last year
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,615 · Updated last week
- ☆327 · Updated last week
- Fast, Flexible and Portable Structured Generation ☆1,511 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs ☆943 · Updated 3 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… ☆1,179 · Updated 4 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,091 · Updated 7 months ago
- RayLLM - LLMs on Ray (Archived). Read README for more info. ☆1,264 · Updated 10 months ago
- A high-performance inference system for large language models, designed for production environments. ☆491 · Updated last month
- Comparison of Language Model Inference Engines ☆239 · Updated last year
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,896 · Updated 2 years ago
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization ☆2,122 · Updated last week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,309 · Updated 8 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batchsizes of 16-32 tokens. ☆992 · Updated last year
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆989 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆327 · Updated 4 months ago
- ☆477 · Updated 2 years ago
- OpenAI compatible API for TensorRT LLM triton backend ☆220 · Updated last year
- Materials for learning SGLang ☆728 · Updated 3 weeks ago
- LLM Inference benchmark ☆433 · Updated last year
- Serverless LLM Serving for Everyone. ☆640 · Updated last week
- This repository contains tutorials and examples for Triton Inference Server ☆815 · Updated last week
- Minimalistic large language model 3D-parallelism training ☆2,497 · Updated last month
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… ☆830 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,315 · Updated 10 months ago
- ☆61 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,279 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆907 · Updated last month