AI-Hypercomputer / inference-benchmark
☆17 · Updated 7 months ago
Alternatives and similar repositories for inference-benchmark
Users interested in inference-benchmark are comparing it to the libraries listed below.
- WG Serving ☆34 · Updated last month
- GenAI inference performance benchmarking tool ☆140 · Updated last week
- Helm charts for llm-d ☆52 · Updated 6 months ago
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆142 · Updated this week
- llm-d helm charts and deployment examples ☆48 · Updated last month
- Holistic job manager on Kubernetes ☆115 · Updated last year
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. ☆75 · Updated 6 months ago
- More Flexible Device Extension Capability in Kubernetes (DevicePlugins++) ☆25 · Updated 2 years ago
- Incubating P/D sidecar for llm-d ☆16 · Updated 2 months ago
- ☆40 · Updated this week
- The main purpose of runtime copilot is to assist with node runtime management tasks such as configuring registries, upgrading versions, i… ☆12 · Updated 2 years ago
- A toolkit for discovering cluster network topology. ☆93 · Updated this week
- ☆40 · Updated last week
- Inference scheduler for llm-d ☆123 · Updated this week
- A set of system-oriented validators for kubeadm preflight checks. ☆37 · Updated 3 months ago
- d.run website ☆15 · Updated last week
- Documentation repository for NVIDIA Cloud Native Technologies ☆35 · Updated this week
- 🧯 Kubernetes coverage for fault awareness and recovery, works for any LLMOps, MLOps, AI workloads. ☆34 · Updated last week
- Cloud Native Artificial Intelligence Model Format Specification ☆174 · Updated this week
- Prototypes and experiments for WG Device Management. ☆13 · Updated 2 months ago
- Distributed KV cache scheduling & offloading libraries ☆98 · Updated last week
- GPU analyzer for Kubernetes GPU clusters ☆17 · Updated 5 years ago
- Example DRA driver that developers can fork and modify to get them started writing their own. ☆112 · Updated last week
- Command-line tools for managing OCI model artifacts, which are bundled based on Model Spec ☆60 · Updated this week
- Cloud Native Benchmarking of Foundation Models ☆44 · Updated 5 months ago
- ☆71 · Updated last week
- ☆190 · Updated last week
- A simulator of Kubernetes for batch and service workloads. ☆50 · Updated 4 years ago
- Go Abstraction for Allocating NVIDIA GPUs with Custom Policies ☆120 · Updated last month
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆360 · Updated this week