llm-d / llm-d-inference-sim
A lightweight vLLM simulator for mocking out replicas.
☆58 Updated last week
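Typical use is to stand in for a real vLLM replica during testing. As a rough sketch (assuming the simulator is running locally and serves vLLM-style OpenAI-compatible endpoints; the port and model name below are placeholders, not values taken from the repository), a client can talk to it like any other OpenAI-compatible server:

```python
# Sketch only: assumes llm-d-inference-sim is running locally and exposes
# vLLM-style OpenAI-compatible endpoints. The port, API key, and model name
# are placeholders for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="mock-model",  # placeholder model name
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)  # simulated response, no real inference
```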
Alternatives and similar repositories for llm-d-inference-sim
Users interested in llm-d-inference-sim are comparing it to the libraries listed below.
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆322 Updated this week
- Inference scheduler for llm-d ☆106 Updated this week
- Distributed KV cache coordinator ☆88 Updated this week
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆117 Updated this week
- Offline optimization of your disaggregated Dynamo graph ☆110 Updated this week
- A workload for deploying LLM inference services on Kubernetes ☆117 Updated this week
- 🧯 Kubernetes coverage for fault awareness and recovery; works with any LLMOps, MLOps, or AI workload. ☆33 Updated last week
- Cloud Native Benchmarking of Foundation Models ☆44 Updated 4 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆674 Updated 3 weeks ago
- A toolkit for discovering cluster network topology. ☆84 Updated last week
- Systematic and comprehensive benchmarks for LLM systems. ☆41 Updated last week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆270 Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆126 Updated 2 weeks ago
- Lightweight daemon for monitoring CUDA runtime API calls with eBPF uprobes ☆140 Updated 8 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆729 Updated last week
- ☆71 Updated last week
- An efficient GPU resource sharing system with fine-grained control for Linux platforms. ☆87 Updated last year
- Variant optimization autoscaler for distributed inference workloads ☆21 Updated last week
- A tool to detect infrastructure issues on cloud native AI systems ☆52 Updated 2 months ago
- Gateway API Inference Extension ☆531 Updated this week
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. ☆72 Updated 4 months ago
- ☆168 Updated last month
- llm-d benchmark scripts and tooling ☆33 Updated this week
- Resource exporter for Volcano scheduling, e.g. NUMA-aware scheduling. ☆18 Updated 6 months ago
- Hooks CUDA-related dynamic libraries using automated code-generation tools. ☆172 Updated last year
- Fast OS-level support for GPU checkpoint and restore ☆257 Updated 2 months ago
- HAMi-core compiles libvgpu.so, which enforces hard limits on GPU usage in containers ☆253 Updated 2 weeks ago
- Artifacts for our NSDI'23 paper TGS ☆90 Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆168 Updated 8 months ago
- Stateful LLM Serving ☆89 Updated 8 months ago