kubeflow / kfserving-lts
☆10 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for kfserving-lts
- Documentation repository for NVIDIA Cloud Native Technologies (☆17, updated this week)
- A utility for stressing GPUs by driving utilization (and thus power consumption) up and down in user-defined cycle intervals. It will als… (☆22, updated last year)
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes (☆78, updated this week)
- TAO Toolkit deep learning networks with TensorFlow 1.x backend (☆11, updated 9 months ago)
- Common source, scripts, and utilities shared across all Triton repositories (☆62, updated this week)
- Kubeflow Notebooks lets you run web-based development environments on your Kubernetes cluster by running them inside Pods (☆18, updated 2 weeks ago)
- The Triton backend for TensorFlow (☆45, updated this week)
- Simulated large clusters for Kubernetes scheduler validation (☆15, updated last year)
- Kubernetes device plugin supporting FPGA and other accelerators (☆11, updated 5 years ago)
- A collection of useful Go libraries to ease the development of NVIDIA Operators for GPU/NIC management (☆18, updated this week)
- Notes and artifacts from the ONNX steering committee (☆25, updated last week)
- Ubuntu kernels optimized for NVIDIA server systems (☆25, updated this week)
- Machine Learning Inference Graph Spec (☆21, updated 5 years ago)
- Common APIs and libraries shared by other Kubeflow operator repositories (☆51, updated last year)
- MLflow Deployment Plugin for Ray Serve (☆42, updated 2 years ago)
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators (☆33, updated last year)
- WIP. Veloce is a low-code Ray-based parallelization library for efficient and heterogeneous machine learning computation (☆18, updated 2 years ago)
- Large Language Model Text Generation Inference on Habana Gaudi (☆27, updated this week)
- Ampere CentOS kernel (☆18, updated 4 months ago)
- The NVIDIA Driver Manager is a Kubernetes component which assists in seamless upgrades of the NVIDIA driver on each node of the cluster (☆33, updated this week)
- CloudAI Benchmark Framework (☆38, updated this week)
- Repository for ONNX working group artifacts (☆23, updated 3 weeks ago)
- SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi (☆37, updated last year)
- The NVIDIA GPU driver container allows provisioning of the NVIDIA driver through the use of containers (☆74, updated this week)
- The core library and APIs implementing the Triton Inference Server (☆105, updated this week); see the client sketch after this list
- RDMA CNI plugin for containerized workloads (☆41, updated 2 months ago)
- The Triton backend for PyTorch TorchScript models (☆127, updated this week)
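
Several entries above are Triton Inference Server repositories (the common utilities, the core library, and the TensorFlow and PyTorch backends). As rough orientation, here is a minimal client-side sketch using the `tritonclient` HTTP API to query a running server; the server address, the model name `my_model`, the input shape, and the tensor names `INPUT0`/`OUTPUT0` are assumptions that must match your actual model configuration, not anything defined by these repositories.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be running locally on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Health checks exposed by the server's HTTP API.
assert client.is_server_live()
assert client.is_model_ready("my_model")  # hypothetical model name

# Build a request for a model assumed to take one FP32 input tensor named INPUT0.
data = np.zeros((1, 16), dtype=np.float32)  # assumed input shape
infer_input = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

# Run inference and read back the output tensor (name assumed to be OUTPUT0).
result = client.infer(model_name="my_model", inputs=[infer_input])
print(result.as_numpy("OUTPUT0"))
```

The gRPC client (`tritonclient.grpc`) follows the same pattern over a different transport.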