llm-d / llm-d-deployer
Helm charts for llm-d
☆50 · Updated 2 months ago
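Since this repository ships Helm charts, the usual way to try them is with the standard Helm CLI. The commands below are a minimal sketch; the repository URL, chart name (`llm-d/llm-d-deployer`), and namespace are illustrative assumptions, not values confirmed by this listing:

```shell
# Add the chart repository (URL is a hypothetical placeholder).
helm repo add llm-d https://example.com/llm-d-charts
helm repo update

# Render the chart locally to inspect the generated manifests
# without touching the cluster (chart name is assumed).
helm template my-llm-d llm-d/llm-d-deployer --namespace llm-d > manifests.yaml

# Install into the cluster once the rendered output looks right.
helm install my-llm-d llm-d/llm-d-deployer --namespace llm-d --create-namespace
```

`helm template` is a useful first step for operator-style charts like these, since it shows exactly which CRDs, Deployments, and Services would be created before anything is applied.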
Alternatives and similar repositories for llm-d-deployer
Users interested in llm-d-deployer are comparing it to the repositories listed below.
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆129 · Updated this week
- GenAI inference performance benchmarking tool ☆97 · Updated last week
- WG Serving ☆30 · Updated 3 weeks ago
- Inference scheduler for llm-d ☆95 · Updated this week
- Distributed KV cache coordinator ☆71 · Updated last week
- ☆168 · Updated 3 weeks ago
- llm-d helm charts and deployment examples ☆42 · Updated last week
- JobSet: a k8s native API for distributed ML training and HPC workloads ☆262 · Updated this week
- ☆39 · Updated this week
- A toolkit for discovering cluster network topology. ☆70 · Updated this week
- Simplified model deployment on llm-d ☆27 · Updated 2 months ago
- ☆19 · Updated this week
- KJob: Tool for CLI-loving ML researchers ☆39 · Updated last week
- Incubating P/D sidecar for llm-d ☆16 · Updated last week
- Holistic job manager on Kubernetes ☆116 · Updated last year
- InstaSlice facilitates the use of Dynamic Resource Allocation (DRA) on Kubernetes clusters for GPU sharing ☆30 · Updated 10 months ago
- ☆254 · Updated last week
- K8s device plugin for GPU sharing ☆99 · Updated 2 years ago
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… ☆150 · Updated this week
- Cloud Native Artificial Intelligence Model Format Specification ☆100 · Updated this week
- ☆40 · Updated 2 weeks ago
- llm-d benchmark scripts and tooling ☆28 · Updated this week
- Smart Kubernetes Scheduling ☆81 · Updated this week
- Example DRA driver that developers can fork and modify to get them started writing their own. ☆92 · Updated 2 weeks ago
- InstaSlice Operator facilitates slicing of accelerators using stable APIs ☆45 · Updated this week
- Gateway API Inference Extension ☆486 · Updated this week
- agent-sandbox enables easy management of isolated, stateful, singleton workloads, ideal for use cases like AI agent runtimes. ☆84 · Updated this week
- ☆49 · Updated 2 months ago
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆279 · Updated this week
- Model Server for Kepler ☆28 · Updated 2 months ago