ai-dynamo / aiperf
AIPerf is a comprehensive benchmarking tool that measures the performance of generative AI models served by your preferred inference solution.
☆126 · Updated this week
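To make "token-level performance" concrete before comparing alternatives, here is a minimal sketch of the kind of measurement such benchmarks perform. This is not AIPerf's code; it assumes only an OpenAI-compatible streaming completions endpoint (the URL, model name, and `benchmark_stream` helper are placeholders) and derives the metrics tools like this typically report: time-to-first-token (TTFT), inter-token latency, and token throughput.

```python
import json
import time

import requests


def benchmark_stream(url: str, model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Time one streaming completion request and derive token-level metrics."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "stream": True,  # ask the server for SSE chunks, roughly one per token
    }
    start = time.perf_counter()
    arrival_times = []
    with requests.post(url, json=payload, stream=True, timeout=120) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():  # SSE frames look like b"data: {...}"
            if not line.startswith(b"data: "):
                continue
            data = line[len(b"data: "):]
            if data == b"[DONE]":
                break
            chunk = json.loads(data)
            # Each non-empty text chunk approximates one generated token.
            if chunk["choices"][0].get("text"):
                arrival_times.append(time.perf_counter())
    if not arrival_times:
        raise RuntimeError("no tokens received")
    ttft = arrival_times[0] - start  # time to first token
    e2e = arrival_times[-1] - start  # end-to-end latency
    itl = (e2e - ttft) / max(len(arrival_times) - 1, 1)  # mean inter-token latency
    return {
        "ttft_s": ttft,
        "inter_token_s": itl,
        "tokens": len(arrival_times),
        "tokens_per_s": len(arrival_times) / e2e,
    }


if __name__ == "__main__":
    # Placeholder endpoint and model name; point these at your own server.
    print(benchmark_stream("http://localhost:8000/v1/completions", "my-model", "Hello"))
```

A real benchmarking tool layers concurrency sweeps, warmup, and statistical aggregation (percentiles across many requests) on top of this single-request measurement.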
Alternatives and similar repositories for aiperf
Users interested in aiperf are comparing it to the libraries listed below.
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆263 · Updated last week
- NVIDIA Inference Xfer Library (NIXL) ☆876 · Updated this week
- Perplexity GPU Kernels ☆560 · Updated 3 months ago
- torchcomms: a modern PyTorch communications API ☆330 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆391 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · Updated 8 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆262 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆470 · Updated last month
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated last week
- ☆134 · Updated last week
- A high-performance and lightweight router for vLLM large-scale deployment ☆112 · Updated this week
- NVIDIA NCCL Tests for Distributed Training ☆134 · Updated 2 weeks ago
- Perplexity open source garden for inference technology ☆362 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated last week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆228 · Updated this week
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆462 · Updated last month
- Efficient and easy multi-instance LLM serving ☆524 · Updated 5 months ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆228 · Updated last week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆773 · Updated this week
- Offline optimization of your disaggregated Dynamo graph ☆184 · Updated this week
- KV cache store for distributed LLM inference ☆390 · Updated 2 months ago
- Systematic and comprehensive benchmarks for LLM systems. ☆50 · Updated 2 weeks ago
- Microsoft Collective Communication Library ☆66 · Updated last year
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆462 · Updated this week
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- ☆61 · Updated last year
- The driver for LMCache core to run in vLLM ☆60 · Updated last year
- Allow torch tensor memory to be released and resumed later ☆216 · Updated 3 weeks ago
- CUDA checkpoint and restore utility ☆410 · Updated 4 months ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆36 · Updated 5 months ago