llm-d / llm-d-inference-sim
A lightweight vLLM simulator for mocking out replicas.
☆65 · Updated this week
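To make "mocking out replicas" concrete: the simulator stands in for a real vLLM server, so routers, schedulers, and autoscalers can be exercised without a GPU. A minimal smoke-test sketch, assuming the simulator is running locally on port 8000 and serving vLLM's OpenAI-compatible chat completions endpoint (the port and model name here are illustrative; see the repo README for the actual invocation and flags):

```python
# Minimal smoke test against a locally running llm-d-inference-sim instance.
# Assumptions (illustrative, not taken from the repo docs): the simulator
# listens on localhost:8000 and serves vLLM's OpenAI-compatible API.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "my-model",  # hypothetical name the simulator was started with
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the responses are simulated, a check like this validates the serving path and wiring rather than model quality.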
Alternatives and similar repositories for llm-d-inference-sim
Users interested in llm-d-inference-sim are comparing it to the repositories listed below.
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆341 · Updated this week
- Offline optimization of your disaggregated Dynamo graph ☆128 · Updated this week
- Inference scheduler for llm-d ☆110 · Updated this week
- Systematic and comprehensive benchmarks for LLM systems. ☆44 · Updated 3 weeks ago
- Distributed KV cache coordinator ☆92 · Updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆713 · Updated 3 weeks ago
- A workload for deploying LLM inference services on Kubernetes ☆140 · Updated this week
- Artifacts for our NSDI'23 paper TGS ☆93 · Updated last year
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆131 · Updated this week
- Kubernetes-native AI serving platform for scalable model serving. ☆93 · Updated this week
- A toolkit for discovering cluster network topology. ☆86 · Updated last week
- An interference-aware scheduler for fine-grained GPU sharing ☆154 · Updated 3 weeks ago
- NVIDIA NCCL Tests for Distributed Training ☆129 · Updated this week
- Cloud Native Benchmarking of Foundation Models ☆44 · Updated 4 months ago
- An efficient GPU resource sharing system with fine-grained control for Linux platforms. ☆87 · Updated last year
- A tool to detect infrastructure issues on cloud native AI systems ☆52 · Updated 3 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆133 · Updated last year
- Hooks CUDA-related dynamic libraries using automated code-generation tools. ☆172 · Updated 2 years ago
- llm-d benchmark scripts and tooling ☆39 · Updated this week
- 🧯 Kubernetes coverage for fault awareness and recovery; works with any LLMOps, MLOps, or AI workload. ☆33 · Updated last week
- Fast OS-level support for GPU checkpoint and restore ☆261 · Updated 2 months ago
- Here are my personal paper reading notes (including cloud computing, resource management, systems, machine learning, deep learning, and o… ☆141 · Updated last month
- NVIDIA Inference Xfer Library (NIXL) ☆770 · Updated last week
- Efficient and easy multi-instance LLM serving ☆517 · Updated 3 months ago
- Serverless Paper Reading and Discussion ☆38 · Updated 2 years ago
- KV cache store for distributed LLM inference ☆376 · Updated last month
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆49 · Updated 4 months ago
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆464 · Updated last week
- Stateful LLM Serving ☆90 · Updated 9 months ago
- DeepSeek-V3/R1 inference performance simulator ☆172 · Updated 8 months ago