llm-d-incubation / workload-variant-autoscaler
Variant optimization autoscaler for distributed inference workloads
☆25 · Updated last week
Alternatives and similar repositories for workload-variant-autoscaler
Users interested in workload-variant-autoscaler are comparing it to the libraries listed below
- Example DRA driver that developers can fork and modify to get started writing their own. ☆114 · Updated last week
- Inference scheduler for llm-d ☆123 · Updated last week
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. ☆74 · Updated 6 months ago
- JobSet: a k8s-native API for distributed ML training and HPC workloads ☆300 · Updated this week
- A collection of community-maintained NRI plugins ☆100 · Updated last week
- Simplified model deployment on llm-d ☆28 · Updated 6 months ago
- ☆209 · Updated this week
- Enabling Kubernetes to make pod placement decisions with platform intelligence. ☆176 · Updated last year
- Cloud Native Artificial Intelligence Model Format Specification ☆174 · Updated this week
- NVSentinel is a cross-platform fault remediation service designed to rapidly remediate runtime node-level issues in GPU-accelerated compu… ☆165 · Updated last week
- ☆34 · Updated last month
- Kubernetes-native AI serving platform for scalable model serving. ☆173 · Updated this week
- Holistic job manager on Kubernetes ☆115 · Updated last year
- InstaSlice Operator facilitates slicing of accelerators using stable APIs ☆49 · Updated last week
- GenAI inference performance benchmarking tool ☆141 · Updated this week
- ☆35 · Updated 5 months ago
- ☆279 · Updated last week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆287 · Updated this week
- llm-d helm charts and deployment examples ☆48 · Updated last month
- NVIDIA Network Operator ☆319 · Updated this week
- NVIDIA DRA Driver for GPUs ☆553 · Updated this week
- Kubernetes Container Runtime Interface proxy service with hardware-resource-aware workload placement policies ☆178 · Updated 6 months ago
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆142 · Updated last week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆654 · Updated last week
- Distributed KV cache scheduling & offloading libraries ☆98 · Updated this week
- Helm charts for llm-d ☆52 · Updated 6 months ago
- Provides deploy scripts and CSI for Lustre. ☆14 · Updated 3 months ago
- A toolkit for discovering cluster network topology. ☆93 · Updated this week
- CAPK is a provider for Cluster API (CAPI) that allows users to deploy fake, Kubemark-backed machines to their clusters. ☆88 · Updated this week
- A lightweight vLLM simulator for mocking out replicas. ☆84 · Updated this week