llm-d / llm-d-inference-scheduler
Inference scheduler for llm-d
☆99 · Updated last week
Alternatives and similar repositories for llm-d-inference-scheduler
Users interested in llm-d-inference-scheduler are comparing it to the libraries listed below.
- A toolkit for discovering cluster network topology. ☆72 · Updated last week
- Distributed KV cache coordinator ☆78 · Updated last week
- GenAI inference performance benchmarking tool ☆105 · Updated this week
- Simplified model deployment on llm-d ☆27 · Updated 3 months ago
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. ☆70 · Updated 3 months ago
- Example DRA driver that developers can fork and modify to get started writing their own. ☆94 · Updated last month
- JobSet: a k8s-native API for distributed ML training and HPC workloads ☆266 · Updated last week
- agent-sandbox enables easy management of isolated, stateful, singleton workloads, ideal for use cases like AI agent runtimes. ☆110 · Updated this week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆260 · Updated last week
- Gateway API Inference Extension ☆495 · Updated this week
- Holistic job manager on Kubernetes ☆116 · Updated last year
- ☆151 · Updated 2 weeks ago
- WG Serving ☆30 · Updated this week
- Command-line tools for managing OCI model artifacts, bundled according to the Model Spec ☆45 · Updated this week
- Incubating P/D sidecar for llm-d ☆16 · Updated 3 weeks ago
- A lightweight vLLM simulator for mocking out replicas. ☆52 · Updated 3 weeks ago
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication (see the sketch after this list) ☆601 · Updated this week
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆131 · Updated last week
- 💫 A lightweight p2p-based cache system for model distributions on Kubernetes. Reframing now to make it a unified cache system with POSI… ☆24 · Updated 10 months ago
- Go Abstraction for Allocating NVIDIA GPUs with Custom Policies ☆116 · Updated 3 weeks ago
- Cloud Native Artificial Intelligence Model Format Specification ☆107 · Updated this week
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆71 · Updated last week
- 🧯 Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, or AI workload. ☆33 · Updated last week
- llm-d Helm charts and deployment examples ☆43 · Updated 2 weeks ago
- Golang bindings for NVIDIA Data Center GPU Manager (DCGM) ☆134 · Updated last week
- A collection of community-maintained NRI plugins ☆93 · Updated last month
- Follows the same workflows as Kubernetes; widely used in the InftyAI community. ☆13 · Updated 3 months ago
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆292 · Updated this week
- All the things to make the scheduler extendable with wasm. ☆127 · Updated 4 months ago
- Helm charts for llm-d ☆50 · Updated 3 months ago
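
Several of the projects above are Kubernetes APIs rather than libraries; LeaderWorkerSet is a representative example of how multi-pod inference replicas are declared. The manifest below is a minimal sketch assuming the leaderworkerset.x-k8s.io/v1 API; the resource name and container images are placeholders, and field names should be checked against the LeaderWorkerSet release you deploy.

```yaml
# Minimal LeaderWorkerSet sketch (assumes API group leaderworkerset.x-k8s.io/v1).
# Each replica is one leader pod plus (size - 1) worker pods created and scaled
# together, the shape typically used for sharded (tensor/pipeline-parallel) model servers.
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: sharded-llm                # hypothetical name
spec:
  replicas: 2                      # two independent leader+worker groups
  leaderWorkerTemplate:
    size: 4                        # 1 leader + 3 workers per group
    leaderTemplate:
      spec:
        containers:
        - name: leader
          image: example.com/llm-server:latest   # placeholder image
    workerTemplate:
      spec:
        containers:
        - name: worker
          image: example.com/llm-worker:latest   # placeholder image
```

Applying this with `kubectl apply -f` would create the leader and worker pods as a unit, which is why projects in this list that serve large sharded models tend to build on LeaderWorkerSet rather than plain Deployments.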