llm-d / llm-d-inference-sim
A lightweight vLLM simulator for mocking out replicas.
☆84 · Updated this week
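To make "mocking out replicas" concrete: the simulator stands in for a real vLLM replica behind a vLLM-style, OpenAI-compatible HTTP API, so clients and control planes can be exercised without a GPU. The sketch below is a minimal illustration, assuming a simulator instance is already running locally; the port and model name are placeholders, not values taken from this listing.

```python
# Minimal sketch: query a locally running llm-d-inference-sim instance
# through its OpenAI-compatible chat endpoint. The address ("localhost:8000")
# and model name ("mock-model") are assumptions for illustration only.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed simulator address
    json={
        "model": "mock-model",  # placeholder model name
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=10,
)
resp.raise_for_status()
# The simulator returns a synthetic completion in the usual response shape.
print(resp.json()["choices"][0]["message"]["content"])
```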
Alternatives and similar repositories for llm-d-inference-sim
Users interested in llm-d-inference-sim are comparing it to the libraries listed below.
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated this week
- Offline optimization of your disaggregated Dynamo graph ☆168 · Updated last week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆760 · Updated 3 weeks ago
- Systematic and comprehensive benchmarks for LLM systems ☆50 · Updated this week
- A workload for deploying LLM inference services on Kubernetes ☆167 · Updated this week
- NVIDIA NCCL Tests for Distributed Training ☆133 · Updated this week
- Cloud Native Benchmarking of Foundation Models ☆44 · Updated 6 months ago
- Inference scheduler for llm-d ☆123 · Updated last week
- Stateful LLM Serving ☆95 · Updated 10 months ago
- Artifacts for our NSDI'23 paper TGS ☆94 · Updated last year
- NVIDIA Inference Xfer Library (NIXL) ☆844 · Updated last week
- Distributed KV cache scheduling & offloading libraries ☆98 · Updated this week
- 🧯 Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, or AI workload ☆34 · Updated 2 weeks ago
- Hooks CUDA-related dynamic libraries using automated code generation tools ☆172 · Updated 2 years ago
- Efficient and easy multi-instance LLM serving ☆523 · Updated 4 months ago
- A tool to detect infrastructure issues in cloud-native AI systems ☆52 · Updated 4 months ago
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing ☆54 · Updated 5 months ago
- An efficient GPU resource-sharing system with fine-grained control for Linux platforms ☆88 · Updated last year
- Lightweight daemon for monitoring CUDA runtime API calls with eBPF uprobes ☆146 · Updated 10 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances