NVIDIA / libnvidia-container
NVIDIA container runtime library
☆846 · Updated this week

Related projects
Alternatives and complementary repositories for libnvidia-container
- NVIDIA container runtime (☆1,108, updated last year)
- NVIDIA Data Center GPU Manager (DCGM), a project for gathering telemetry and measuring the health of NVIDIA GPUs (☆415, updated this week)
- GPU plugin for node feature discovery in Kubernetes (☆293, updated 5 months ago)
- NVIDIA GPU metrics exporter for Prometheus, leveraging DCGM (☆924, updated this week)
- Tools for monitoring NVIDIA GPUs on Linux (☆1,018, updated 3 years ago)
- Build and run containers leveraging NVIDIA GPUs (☆2,472, updated this week)
- MIG Partition Editor for NVIDIA GPUs (☆174, updated this week)
- NVIDIA GPU Operator creates, configures, and manages GPUs in Kubernetes (☆1,854, updated this week)
- NVIDIA device plugin for Kubernetes (☆2,835, updated this week)
- Tools for building GPU clusters (☆1,265, updated 8 months ago)
- AIStore: scalable storage for AI applications (☆1,290, updated this week)
- GPU sharing device plugin for Kubernetes clusters (☆471, updated last year)
- Kubernetes (k8s) device plugin to enable registration of AMD GPUs with a container cluster (☆273, updated this week)
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) (☆440, updated last month)
- A simple yet powerful tool to turn traditional container/OS images into unprivileged sandboxes (☆644, updated 3 weeks ago)
- NVIDIA k8s device plugin for KubeVirt (☆232, updated last month)
- NCCL Tests (☆898, updated 2 weeks ago)
- Run cloud-native workloads on NVIDIA GPUs (☆134, updated this week)
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala (☆570, updated this week)
- GPU Sharing Scheduler for Kubernetes clusters (☆1,415, updated 10 months ago)
- Fork of the NVIDIA device plugin for Kubernetes with support for shared GPUs by declaring GPUs multiple times (☆88, updated 2 years ago)
- Triton Model Analyzer, a CLI tool for understanding the compute and memory requirements of the Triton Inference Serv… (☆433, updated last week)
- Container plugin for the Slurm Workload Manager (☆294, updated 2 weeks ago)
- Practical GPU sharing without memory size constraints (☆226, updated last month)
- RDMA device plugin for Kubernetes (☆203, updated 11 months ago)
- Multi-GPU CUDA stress test (☆1,435, updated 3 months ago)
- Run your deep learning workloads on Kubernetes more easily and efficiently (☆510, updated 8 months ago)
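Several of the Kubernetes projects above (the NVIDIA device plugin, the GPU Operator, and the GPU-sharing plugins and schedulers) ultimately surface GPUs to workloads as a schedulable extended resource. As a minimal sketch of how that looks from the workload side, here is a pod spec requesting one GPU through the `nvidia.com/gpu` resource the device plugin registers; the pod name and image tag are illustrative, not taken from any of the repositories listed:

```yaml
# Illustrative pod spec: requests one whole GPU via the nvidia.com/gpu
# extended resource advertised by the NVIDIA device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test        # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04  # illustrative tag
      command: ["nvidia-smi"]  # prints GPU info if the runtime is wired up
      resources:
        limits:
          nvidia.com/gpu: 1    # whole-GPU granularity; the sharing projects above subdivide this
```

The GPU-sharing projects in the list exist precisely because this resource is whole-GPU granular by default; they substitute fractional or memory-based resource names so multiple pods can land on one device.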