kserve / modelmesh-serving
Controller for ModelMesh
☆227 · Updated 2 weeks ago
Alternatives and similar repositories for modelmesh-serving:
Users who are interested in modelmesh-serving are comparing it to the libraries listed below.
- Distributed Model Serving Framework ☆159 · Updated 3 weeks ago
- User documentation for KServe. ☆105 · Updated this week
- Kubeflow Pipelines on Tekton ☆180 · Updated 4 months ago
- ☆115 · Updated this week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆376 · Updated last week
- JobSet: a k8s-native API for distributed ML training and HPC workloads ☆215 · Updated last week
- Repository for the open inference protocol specification ☆53 · Updated 8 months ago
- KServe models web UI ☆36 · Updated 2 weeks ago
- GPU plugin to the node feature discovery for Kubernetes ☆299 · Updated 10 months ago
- Dynamic Resource Allocation (DRA) for NVIDIA GPUs in Kubernetes ☆337 · Updated this week
- A curated list of awesome projects and resources related to Kubeflow (a CNCF incubating project) ☆207 · Updated last week
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) ☆471 · Updated 3 weeks ago
- KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale ☆383 · Updated this week
- Gateway API Inference Extension ☆203 · Updated this week
- Holistic job manager on Kubernetes ☆114 · Updated last year
- KServe community docs for contributions and process ☆12 · Updated 2 months ago
- MIG Partition Editor for NVIDIA GPUs ☆192 · Updated last week
- GenAI inference performance benchmarking tool ☆31 · Updated last week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆467 · Updated 3 weeks ago
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes ☆92 · Updated last week
- A fork/refactoring of the ajmyyra/ambassador-auth-oidc project ☆88 · Updated 11 months ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models, with a focus on NVIDIA GPUs ☆199 · Updated 2 months ago
- Fork of the NVIDIA device plugin for Kubernetes with support for shared GPUs by declaring GPUs multiple times ☆88 · Updated 2 years ago
- InstaSlice Operator facilitates slicing of accelerators using stable APIs ☆33 · Updated this week
- Automatic tuning for ML model deployment on Kubernetes ☆81 · Updated 5 months ago
- NVIDIA Network Operator ☆245 · Updated this week
- AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads ☆204 · Updated last year
- Device plugins for Volcano, e.g. GPU ☆117 · Updated 3 weeks ago
- Markdown docs ☆85 · Updated last week
- Docker for Your ML/DL Models Based on OCI Artifacts ☆466 · Updated last year