NVIDIA / NVSentinel
NVSentinel is a cross-platform fault-remediation service designed to rapidly remediate runtime node-level issues in GPU-accelerated computing environments.
☆165, updated last week
Alternatives and similar repositories for NVSentinel
Users interested in NVSentinel are comparing it to the repositories listed below.
- GenAI inference performance benchmarking tool (☆141, updated this week)
- JobSet: a k8s native API for distributed ML training and HPC workloads (☆300, updated this week)
- Example DRA driver that developers can fork and modify to get started writing their own (☆114, updated last week)
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment (☆142, updated last week)
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes (☆74, updated 6 months ago)
- LeaderWorkerSet: an API for deploying a group of pods as a unit of replication (☆654, updated last week)
- Cloud Native Artificial Intelligence Model Format Specification (☆174, updated this week)
- NVIDIA DRA Driver for GPUs (☆553, updated this week)
- A toolkit for discovering cluster network topology (☆93, updated this week)
- Gateway API Inference Extension (☆573, updated this week)
- Inference scheduler for llm-d (☆123, updated last week)
- Enabling Kubernetes to make pod placement decisions with platform intelligence (☆176, updated last year)
- llm-d Helm charts and deployment examples (☆48, updated last month)
- Node Resource Interface (☆355, updated last week)
- Holistic job manager on Kubernetes (☆115, updated last year)
- KAI Scheduler is an open-source Kubernetes-native scheduler for AI workloads at large scale (☆1,095, updated this week)
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling (☆159, updated this week)
- A collection of community-maintained NRI plugins (☆100, updated last week)
- A workload for deploying LLM inference services on Kubernetes (☆167, updated this week)
- NVIDIA Network Operator (☆319, updated this week)
- K8s device plugin for GPU sharing (☆98, updated 2 years ago)
- Command-line tools for managing OCI model artifacts, which are bundled based on the Model Spec (☆60, updated last week)
- Kubernetes-native AI serving platform for scalable model serving (☆173, updated this week)
- WG Serving (☆34, updated last month)
- CAPK is a provider for Cluster API (CAPI) that allows users to deploy fake, Kubemark-backed machines to their clusters (☆88, updated this week)
- Simplified model deployment on llm-d (☆28, updated 6 months ago)
- A federation scheduler for multi-cluster environments (☆61, updated this week)