simon-mo / vLLM-Benchmark
☆29, updated 2 months ago
Alternatives and similar repositories for vLLM-Benchmark
Users interested in vLLM-Benchmark are comparing it to the libraries listed below. A minimal token-timing sketch, illustrating the kind of measurement these benchmarking tools perform, follows the list.
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… — ☆169, updated this week
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) — ☆174, updated this week
- The driver for LMCache core to run in vLLM — ☆44, updated 5 months ago
- KV cache store for distributed LLM inference — ☆290, updated last month
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. — ☆95, updated 2 months ago
- LLM Serving Performance Evaluation Harness — ☆79, updated 4 months ago
- Fast and memory-efficient exact attention — ☆80, updated last week
- ☆55, updated 7 months ago
- DeeperGEMM: crazy optimized version — ☆69, updated 2 months ago
- High-performance safetensors model loader — ☆48, updated 2 weeks ago
- ☆44, updated 6 months ago
- ☆37, updated 7 months ago
- CUDA checkpoint and restore utility — ☆346, updated 5 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable — ☆168, updated 9 months ago
- ☆90, updated 3 months ago
- NVIDIA Inference Xfer Library (NIXL) — ☆473, updated this week
- Stateful LLM Serving — ☆76, updated 4 months ago
- ☆58, updated 10 months ago
- Perplexity GPU Kernels — ☆405, updated this week
- Home for OctoML PyTorch Profiler — ☆113, updated 2 years ago
- Microsoft Collective Communication Library — ☆64, updated 7 months ago
- ☆47, updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances — ☆123, updated last year
- ☆52, updated 4 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray — ☆129, updated last week
- PyTorch distributed training acceleration framework — ☆51, updated 5 months ago
- ☆83, updated 8 months ago
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments. — ☆28, updated 3 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving — ☆48, updated 2 months ago
- A low-latency & high-throughput serving engine for LLMs — ☆390, updated last month
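Several of the tools above (vLLM-Benchmark, genai-bench, the LLM serving evaluation harness) measure token-level latency and throughput against a running LLM server. As a rough illustration of what such a benchmark does, here is a minimal sketch that times a single streaming request against an OpenAI-compatible endpoint. This is not code from any of the listed repositories; the endpoint URL, model name, prompt, and the one-token-per-streamed-chunk assumption are all placeholders.

```python
"""Minimal latency/throughput probe for an OpenAI-compatible LLM endpoint.

Illustrative sketch only -- the URL, model name, and prompt below are assumptions.
"""
import time
import requests

BASE_URL = "http://localhost:8000/v1"  # assumed local OpenAI-compatible server (e.g. vLLM)
MODEL = "my-model"                     # placeholder model name
PROMPT = "Explain KV caching in one paragraph."


def run_request(prompt: str, max_tokens: int = 128) -> dict:
    """Send one streaming completion request and record basic token timing."""
    start = time.perf_counter()
    first_token_at = None
    n_chunks = 0
    with requests.post(
        f"{BASE_URL}/completions",
        json={"model": MODEL, "prompt": prompt,
              "max_tokens": max_tokens, "stream": True},
        stream=True,
        timeout=120,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line or line == b"data: [DONE]":
                continue
            if first_token_at is None:
                first_token_at = time.perf_counter()  # time to first token
            n_chunks += 1  # roughly one token per streamed chunk (assumption)
    end = time.perf_counter()
    return {
        "ttft_s": (first_token_at or end) - start,
        "total_s": end - start,
        "chunks": n_chunks,
        "tokens_per_s": n_chunks / (end - start) if end > start else 0.0,
    }


if __name__ == "__main__":
    print(run_request(PROMPT))
```

A real harness such as those listed above goes further: it drives many concurrent requests at controlled arrival rates, samples prompt and output lengths from realistic distributions, and aggregates latency percentiles and goodput across the whole run rather than timing a single request.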