runai-model-streamer ☆297 · Mar 19, 2026 · Updated last month
Alternatives and similar repositories for runai-model-streamer
Users interested in runai-model-streamer are comparing it to the libraries listed below.
- GPU Environment Management for JupyterLab ☆26 · Feb 19, 2024 · Updated 2 years ago
- GPU environment and cluster management with LLM support ☆659 · May 16, 2024 · Updated last year
- ☆243 · Updated this week
- KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale ☆1,233 · Updated this week
- GPU Environment Management for Visual Studio Code ☆39 · Jul 19, 2023 · Updated 2 years ago
- High-performance safetensors model loader ☆133 · Updated this week
- A top-like tool for monitoring GPUs in a cluster ☆85 · Feb 14, 2024 · Updated 2 years ago
- ☆15 · Apr 2, 2026 · Updated 2 weeks ago
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆194 · Updated this week
- Python client for the Run:ai REST API ☆24 · Dec 15, 2025 · Updated 4 months ago
- The main purpose of runtime copilot is to assist with node runtime management tasks such as configuring registries, upgrading versions, i… ☆12 · May 16, 2023 · Updated 2 years ago
- Gateway API Inference Extension ☆639 · Apr 10, 2026 · Updated last week
- 💫 A lightweight p2p-based cache system for model distributions on Kubernetes. Reframing now to make it a unified cache system with POSI… ☆26 · Dec 6, 2024 · Updated last year
- Model Express is a Rust-based component meant to be placed next to existing model inference systems to speed up their startup times and i… ☆53 · Updated this week
- Simplified Data Management and Sharing for Kubernetes ☆18 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆297 · Feb 6, 2026 · Updated 2 months ago
- Container Object Storage Interface (COSI) provisioner responsible for interfacing with COSI drivers. NOTE: The content of this repo has bee… ☆33 · Nov 26, 2024 · Updated last year
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆697 · Updated this week
- CUDA checkpoint and restore utility ☆437 · Sep 15, 2025 · Updated 7 months ago
- https://hf.co/hexgrad/Kokoro-82M ☆14 · Jan 14, 2026 · Updated 3 months ago
- Markdown docs ☆96 · Feb 1, 2026 · Updated 2 months ago
- A Datacenter Scale Distributed Inference Serving Framework ☆6,570 · Updated this week
- ☆19 · Apr 12, 2026 · Updated last week
- Fast and memory-efficient exact attention ☆20 · Apr 10, 2026 · Updated last week
- 🧯 Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, or AI workload ☆35 · Mar 31, 2026 · Updated 2 weeks ago
- [WIP] Better (FP8) attention for Hopper ☆33 · Feb 24, 2025 · Updated last year
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆422 · Updated this week
- WG Serving ☆34 · Mar 24, 2026 · Updated 3 weeks ago
- "An optimizer custom node for ComfyUI that ensures each queue execution starts in an optimal state by clearing unused VRAM and unnecessar… ☆19 · Jul 18, 2025 · Updated 9 months ago
- An operator for deploying and maintaining NVIDIA NIMs and NeMo microservices in a Kubernetes environment ☆153 · Apr 10, 2026 · Updated last week
- Container Object Storage Interface (COSI) API responsible for defining the API for COSI objects. NOTE: The content of this repo has been moved t… ☆69 · Nov 26, 2024 · Updated last year
- Using short models to classify long texts ☆21 · Mar 8, 2023 · Updated 3 years ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- An Envoy-inspired, LLM-first gateway for LLM serving and for downstream application developers and enterprises ☆26 · Apr 24, 2025 · Updated 11 months ago
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆23 · Nov 15, 2024 · Updated last year
- ☆55 · Aug 1, 2025 · Updated 8 months ago
- Holistic job manager on Kubernetes ☆116 · Feb 20, 2024 · Updated 2 years ago
- Custom scheduler to deploy ML models to TRTIS for GPU sharing ☆11 · Apr 1, 2020 · Updated 6 years ago
- A workload for deploying LLM inference services on Kubernetes ☆203 · Updated this week