kserve / modelmesh-serving
Controller for ModelMesh
☆229 · Updated last month
Alternatives and similar repositories for modelmesh-serving:
Users interested in modelmesh-serving are comparing it to the libraries listed below.
- Distributed Model Serving Framework ☆165 · Updated last month
- User documentation for KServe. ☆106 · Updated last week
- ☆119 · Updated this week
- Kubeflow Pipelines on Tekton ☆180 · Updated 5 months ago
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆421 · Updated this week
- Repository for open inference protocol specification ☆54 · Updated 9 months ago
- JobSet: a k8s native API for distributed ML training and HPC workloads ☆223 · Updated last week
- Gateway API Inference Extension ☆268 · Updated this week
- Dynamic Resource Allocation (DRA) for NVIDIA GPUs in Kubernetes ☆352 · Updated this week
- GPU plugin to the node feature discovery for Kubernetes ☆300 · Updated 11 months ago
- KServe models web UI ☆38 · Updated last week
- Unified runtime-adapter image of the sidecar containers which run in the modelmesh pods ☆21 · Updated last month
- This is a fork/refactoring of the ajmyyra/ambassador-auth-oidc project ☆88 · Updated last year
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) ☆475 · Updated 2 weeks ago
- GenAI inference performance benchmarking tool ☆41 · Updated this week
- A curated list of awesome projects and resources related to Kubeflow (a CNCF incubating project) ☆209 · Updated last week
- KServe community docs for contributions and process ☆12 · Updated this week
- Argoflow has been superseded by deployKF ☆137 · Updated last year
- Holistic job manager on Kubernetes ☆115 · Updated last year
- MIG Partition Editor for NVIDIA GPUs ☆198 · Updated last week
- elastic-gpu-scheduler is a Kubernetes scheduler extender for GPU resource scheduling. ☆140 · Updated 2 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆199 · Updated last week
- KAI Scheduler is an open source Kubernetes Native scheduler for AI workloads at large scale ☆531 · Updated this week
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆97 · Updated this week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆474 · Updated 2 weeks ago
- markdown docs ☆86 · Updated this week
- Fork of NVIDIA device plugin for Kubernetes with support for shared GPUs by declaring GPUs multiple times ☆88 · Updated 2 years ago
- Run cloud native workloads on NVIDIA GPUs ☆168 · Updated last week
- Device plugins for Volcano, e.g. GPU ☆119 · Updated last month
- Docker for Your ML/DL Models Based on OCI Artifacts ☆466 · Updated last year