kserve / modelmesh-serving
Controller for ModelMesh
☆242 · updated 7 months ago
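ModelMesh Serving plugs into KServe's InferenceService API: the controller watches InferenceServices annotated for ModelMesh and places the referenced models onto a shared pool of ServingRuntime pods instead of creating a dedicated deployment per model. Below is a minimal sketch of registering a model this way with the official Kubernetes Python client; it assumes modelmesh-serving is installed in a `modelmesh-serving` namespace and that a ServingRuntime for the `sklearn` format is available, and the model name and storage URI are illustrative placeholders.

```python
# Minimal sketch: registering a model with ModelMesh through the KServe
# InferenceService CRD, using the official Kubernetes Python client.
# The namespace, model name and storage URI are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {
        "name": "example-sklearn-isvc",
        # This annotation asks KServe to hand the InferenceService over to
        # ModelMesh rather than creating a standalone predictor deployment.
        "annotations": {"serving.kserve.io/deploymentMode": "ModelMesh"},
    },
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "sklearn"},
                "storageUri": "s3://example-bucket/sklearn/mnist-svm.joblib",
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="modelmesh-serving",
    plural="inferenceservices",
    body=inference_service,
)
```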
Alternatives and similar repositories for modelmesh-serving
Users interested in modelmesh-serving are comparing it to the libraries listed below.
- Distributed Model Serving Framework · ☆183 · updated 4 months ago
- User documentation for KServe · ☆109 · updated this week
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… · ☆161 · updated last week
- Kubeflow Pipelines on Tekton · ☆182 · updated last year
- Repository for the open inference protocol specification (see the request sketch after this list) · ☆64 · updated 8 months ago
- JobSet: a k8s native API for distributed ML training and HPC workloads · ☆304 · updated this week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication · ☆656 · updated last week
- Gateway API Inference Extension · ☆576 · updated this week
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) · ☆510 · updated 2 weeks ago
- GenAI inference performance benchmarking tool · ☆142 · updated last week
- GPU plugin to the node feature discovery for Kubernetes · ☆308 · updated last year
- NVIDIA DRA Driver for GPUs · ☆553 · updated last week
- AWS virtual GPU device plugin providing the capability to use smaller virtual GPUs for machine learning inference workloads · ☆204 · updated 2 years ago
- KServe models web UI · ☆47 · updated this week
- A curated list of awesome projects and resources related to Kubeflow (a CNCF incubating project) · ☆222 · updated last month
- Holistic job manager on Kubernetes · ☆116 · updated last year
- markdown docs · ☆93 · updated this week
- ☆191 · updated 2 weeks ago
- Helm charts for the KubeRay project · ☆59 · updated 2 months ago
- Argoflow has been superseded by deployKF · ☆134 · updated 2 years ago
- Cloud-native way to provide elastic Jupyter Notebooks on Kubernetes. Run remote kernels, natively · ☆204 · updated 3 years ago
- KAI Scheduler is an open source Kubernetes Native scheduler for AI workloads at large scale · ☆1,111 · updated this week
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment · ☆146 · updated this week
- llm-d helm charts and deployment examples · ☆48 · updated last month
- This is a fork/refactoring of the ajmyyra/ambassador-auth-oidc project · ☆89 · updated last year
- Fork of NVIDIA device plugin for Kubernetes with support for shared GPUs by declaring GPUs multiple times · ☆87 · updated 3 years ago
- Information about the Kubeflow community including proposals and governance information · ☆183 · updated this week
- Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elas… · ☆681 · updated last year
- Docker for Your ML/DL Models Based on OCI Artifacts · ☆472 · updated 2 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs · ☆216 · updated last week
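The open inference protocol entry above (also known as the KServe V2 protocol) defines the wire format that ModelMesh and several of the servers listed here expose for prediction requests. A minimal request sketch follows, assuming a REST endpoint reachable at a placeholder URL (for example a port-forwarded ModelMesh REST proxy); the model name, tensor name, shape, and values are illustrative.

```python
# Minimal sketch of an Open Inference Protocol (KServe V2) REST request.
# The URL, model name, tensor name, shape and datatype are placeholders
# chosen for illustration; adapt them to the deployed model's signature.
import requests

base_url = "http://localhost:8008"   # e.g. a port-forwarded REST proxy
model_name = "example-sklearn-isvc"

payload = {
    "inputs": [
        {
            "name": "predict",        # input tensor name expected by the model
            "shape": [1, 64],
            "datatype": "FP64",
            "data": [0.0] * 64,       # flattened row-major tensor contents
        }
    ]
}

resp = requests.post(f"{base_url}/v2/models/{model_name}/infer", json=payload)
resp.raise_for_status()
print(resp.json()["outputs"])         # output tensors, per the V2 response schema
```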