ai-dynamo / modelexpress
Model Express is a Rust-based component meant to be placed next to existing model inference systems to speed up their startup times and improve overall performance.
☆25 · Updated this week
Alternatives and similar repositories for modelexpress
Users who are interested in modelexpress are comparing it to the libraries listed below.
- GenAI inference performance benchmarking tool ☆142 · Updated last week
- Distributed KV cache scheduling & offloading libraries ☆101 · Updated last week
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. ☆74 · Updated 6 months ago
- WG Serving ☆34 · Updated last month
- llm-d helm charts and deployment examples ☆48 · Updated last month
- A toolkit for discovering cluster network topology. ☆96 · Updated this week
- The main purpose of runtime copilot is to assist with node runtime management tasks such as configuring registries, upgrading versions, i… ☆12 · Updated 2 years ago
- Inference scheduler for llm-d ☆127 · Updated this week
- NVSentinel is a cross-platform fault remediation service designed to rapidly remediate runtime node-level issues in GPU-accelerated compu… ☆177 · Updated this week
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆146 · Updated last week
- Holistic job manager on Kubernetes ☆116 · Updated last year
- LeaderWorkerSet: an API for deploying a group of pods as a unit of replication ☆662 · Updated last week
- Device plugin for Volcano vGPU that supports hard resource isolation ☆143 · Updated last month
- Example DRA driver that developers can fork and modify to get started writing their own ☆117 · Updated last week
- Kubernetes-native AI serving platform for scalable model serving ☆208 · Updated this week
- A workload for deploying LLM inference services on Kubernetes ☆168 · Updated last week
- A federation scheduler for multi-cluster ☆61 · Updated last week
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated this week
- JobSet: a Kubernetes-native API for distributed ML training and HPC workloads ☆308 · Updated this week
- 🧯 Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, and AI workloads ☆35 · Updated this week
- Provides deploy scripts and CSI for Lustre ☆14 · Updated 3 months ago
- Go abstraction for allocating NVIDIA GPUs with custom policies ☆121 · Updated 2 months ago
- d.run website ☆15 · Updated this week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆287 · Updated 2 weeks ago
- Incubating P/D sidecar for llm-d ☆16 · Updated 2 months ago
- Command-line tools for managing OCI model artifacts, which are bundled based on the Model Spec ☆61 · Updated last week
- Device plugins for Volcano, e.g. GPU ☆132 · Updated 10 months ago