ai-dynamo / aiperf
AIPerf is a comprehensive benchmarking tool that measures the performance of generative AI models served by your preferred inference solution.
☆71 · Updated this week
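To give a sense of what the tool does, a minimal benchmarking run against an OpenAI-compatible endpoint might look like the sketch below. The `profile` subcommand and flag names are assumptions based on aiperf's genai-perf lineage, and the model ID and URL are placeholders, not defaults.

```shell
# A minimal sketch of an AIPerf run; subcommand and flag names are
# assumptions carried over from genai-perf, and the model ID and URL
# are placeholders for whatever your server exposes.
aiperf profile \
  --model Qwen/Qwen3-0.6B \
  --url localhost:8000 \
  --endpoint-type chat \
  --streaming \
  --concurrency 10 \
  --request-count 100
```

Tools in this category typically report token-level metrics such as time-to-first-token, inter-token latency, and throughput, which is the axis on which most of the serving engines listed below compete.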
Alternatives and similar repositories for aiperf
Users interested in aiperf are comparing it to the repositories listed below
- NVIDIA Inference Xfer Library (NIXL) ☆770 · Updated this week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆239 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆247 · Updated last week
- Offline optimization of your disaggregated Dynamo graph ☆128 · Updated this week
- NVIDIA NCCL Tests for Distributed Training ☆129 · Updated this week
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆334 · Updated this week
- ☆125 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆456 · Updated 2 months ago
- Efficient and easy multi-instance LLM serving ☆517 · Updated 3 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆448 · Updated 6 months ago
- Perplexity GPU Kernels ☆539 · Updated last month
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆713 · Updated 2 weeks ago
- Cloud Native Benchmarking of Foundation Models ☆44 · Updated 4 months ago
- Systematic and comprehensive benchmarks for LLM systems ☆44 · Updated 3 weeks ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆133 · Updated last year
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆349 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs) ☆749 · Updated 8 months ago
- Allow torch tensor memory to be released and resumed later ☆187 · Updated 2 weeks ago
- A tool for bandwidth measurements on NVIDIA GPUs ☆583 · Updated 8 months ago
- ☆58 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- torchcomms: a modern PyTorch communications API ☆302 · Updated this week
- CUDA checkpoint and restore utility ☆396 · Updated 3 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆118 · Updated 7 months ago
- KV cache store for distributed LLM inference ☆372 · Updated last month
- NCCL Fast Socket is a transport-layer plugin that improves NCCL collective communication performance on Google Cloud ☆122 · Updated 2 years ago
- Stateful LLM Serving ☆89 · Updated 9 months ago
- A tool to detect infrastructure issues on cloud native AI systems ☆52 · Updated 3 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆160 · Updated this week