llm-d-incubation / llm-d-infra
llm-d helm charts and deployment examples
☆45 · Updated 3 weeks ago
Alternatives and similar repositories for llm-d-infra
Users who are interested in llm-d-infra are comparing it to the repositories listed below.
- GenAI inference performance benchmarking tool ☆106 · Updated last week
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆130 · Updated this week
- Example DRA driver that developers can fork and modify to get them started writing their own. ☆94 · Updated last month
- InstaSlice Operator facilitates slicing of accelerators using stable APIs ☆46 · Updated last week
- Gateway API Inference Extension ☆501 · Updated this week
- Cloud Native Artificial Intelligence Model Format Specification ☆107 · Updated this week
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. ☆70 · Updated 3 months ago
- LeaderWorkerSet: an API for deploying a group of pods as a unit of replication ☆601 · Updated this week
- NVIDIA DRA Driver for GPUs ☆458 · Updated last week
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… ☆150 · Updated last week
- Simplified model deployment on llm-d ☆27 · Updated 3 months ago
- WG Serving ☆30 · Updated last week
- JobSet: a k8s-native API for distributed ML training and HPC workloads ☆268 · Updated last week
- Holistic job manager on Kubernetes ☆116 · Updated last year
- CAPK is a provider for Cluster API (CAPI) that allows users to deploy fake, Kubemark-backed machines to their clusters. ☆80 · Updated 3 weeks ago
- Incubating P/D sidecar for llm-d ☆16 · Updated last month
- ☆174 · Updated last week
- llm-d benchmark scripts and tooling ☆30 · Updated this week
- ☆151 · Updated 2 weeks ago
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆260 · Updated last week
- Repository to demo GPU sharing with time slicing, MPS, MIG, and others ☆49 · Updated last year
- agent-sandbox enables easy management of isolated, stateful, singleton workloads, ideal for use cases like AI agent runtimes. ☆110 · Updated last week
- Helm charts for llm-d ☆50 · Updated 3 months ago
- Inference scheduler for llm-d ☆99 · Updated this week
- GPU plugin to the node feature discovery for Kubernetes ☆305 · Updated last year
- 🧯 Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, and AI workloads. ☆33 · Updated last week
- Distributed KV cache coordinator ☆79 · Updated this week
- Following the same workflows as Kubernetes. Widely used in the InftyAI community. ☆13 · Updated 3 months ago
- d.run website ☆15 · Updated this week
- ☆264 · Updated this week