llm-d / llm-d-inference-sim
A lightweight vLLM simulator for mocking out replicas.
☆48 · Updated last week
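Since the simulator stands in for a vLLM replica, a quick smoke test is to send it an OpenAI-style request. The sketch below is illustrative only: it assumes the simulator is running locally on port 8000 and serves a vLLM-style OpenAI-compatible /v1/chat/completions endpoint; the port and model name are placeholder assumptions, not values taken from the project.

```python
# Minimal smoke test against a locally running llm-d-inference-sim instance.
# Assumptions (not taken from the repo): the simulator listens on
# localhost:8000 and exposes a vLLM-style OpenAI-compatible chat endpoint;
# the model name must match whatever the simulator was started with.
import requests

BASE_URL = "http://localhost:8000"  # assumed port

resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    json={
        "model": "mock-model",  # placeholder model name
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    },
    timeout=10,
)
resp.raise_for_status()
# A simulator returns a canned completion rather than real model output.
print(resp.json()["choices"][0]["message"]["content"])
```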
Alternatives and similar repositories for llm-d-inference-sim
Users interested in llm-d-inference-sim are comparing it to the repositories listed below.
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆279 · Updated this week
- Inference scheduler for llm-d ☆95 · Updated this week
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆65 · Updated last week
- Distributed KV cache coordinator ☆71 · Updated last week
- Cloud Native Benchmarking of Foundation Models ☆42 · Updated 2 months ago
- Systematic and comprehensive benchmarks for LLM systems. ☆36 · Updated last month
- 🧯 Kubernetes coverage for fault awareness and recovery; works with any LLMOps, MLOps, or AI workload. ☆33 · Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆111 · Updated last week
- A tool to detect infrastructure issues on cloud native AI systems ☆47 · Updated 2 weeks ago
- A workload for deploying LLM inference services on Kubernetes ☆70 · Updated last week
- DeepSeek-V3/R1 inference performance simulator ☆170 · Updated 6 months ago
- llm-d benchmark scripts and tooling ☆28 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆648 · Updated this week
- An interference-aware scheduler for fine-grained GPU sharing ☆147 · Updated 8 months ago
- Fast OS-level support for GPU checkpoint and restore ☆238 · Updated this week
- Artifacts for our NSDI'23 paper TGS ☆86 · Updated last year
- Stateful LLM Serving ☆85 · Updated 6 months ago
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆257 · Updated 2 weeks ago
- Hooks CUDA-related dynamic libraries via automated code generation tools. ☆167 · Updated last year
- An efficient GPU resource sharing system with fine-grained control for Linux platforms. ☆85 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆111 · Updated 4 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆129 · Updated last year
- ☆85 · Updated 6 months ago
- KV cache store for distributed LLM inference ☆336 · Updated 3 weeks ago
- kvcached: Elastic KV cache for dynamic GPU sharing and efficient multi-LLM inference. ☆94 · Updated this week
- Efficient and easy multi-instance LLM serving ☆491 · Updated 3 weeks ago
- Here are my personal paper reading notes (including cloud computing, resource management, systems, machine learning, deep learning, and o… ☆123 · Updated last month
- A toolkit for discovering cluster network topology. ☆70 · Updated this week
- ☆21 · Updated 2 months ago
- ☆13 · Updated 5 months ago