☆290 · Mar 19, 2026 · Updated last week
Alternatives and similar repositories for runai-model-streamer
Users that are interested in runai-model-streamer are comparing it to the libraries listed below.
- GPU Environment Management for JupyterLab ☆26 · Feb 19, 2024 · Updated 2 years ago
- GPU environment and cluster management with LLM support ☆658 · May 16, 2024 · Updated last year
- ☆232 · Updated this week
- High-performance safetensors model loader ☆125 · Updated this week
- KAI Scheduler is an open-source Kubernetes-native scheduler for AI workloads at large scale ☆1,191 · Updated this week
- GPU Environment Management for Visual Studio Code ☆39 · Jul 19, 2023 · Updated 2 years ago
- A top-like tool for monitoring GPUs in a cluster ☆84 · Feb 14, 2024 · Updated 2 years ago
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆176 · Updated this week
- ☆15 · Nov 4, 2025 · Updated 4 months ago
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Nov 18, 2024 · Updated last year
- Model Express is a Rust-based component meant to be placed next to existing model inference systems to speed up their startup times and i… ☆43 · Updated this week
- The main purpose of runtime copilot is to assist with node runtime management tasks such as configuring registries, upgrading versions, i… ☆12 · May 16, 2023 · Updated 2 years ago
- Gateway API Inference Extension ☆616 · Updated this week
- 💫 A lightweight p2p-based cache system for model distributions on Kubernetes. Reframing now to make it a unified cache system with POSI… ☆26 · Dec 6, 2024 · Updated last year
- Simplified Data Management and Sharing for Kubernetes ☆18 · Mar 19, 2026 · Updated last week
- Module, Model, and Tensor Serialization/Deserialization ☆296 · Feb 6, 2026 · Updated last month
- Container Object Storage Interface (COSI) provisioner responsible to interface with COSI drivers. NOTE: The content of this repo has bee… ☆33 · Nov 26, 2024 · Updated last year
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆682 · Mar 21, 2026 · Updated last week
- CUDA checkpoint and restore utility ☆431 · Sep 15, 2025 · Updated 6 months ago
- markdown docs ☆95 · Feb 1, 2026 · Updated last month
- https://hf.co/hexgrad/Kokoro-82M ☆14 · Jan 14, 2026 · Updated 2 months ago
- A Datacenter Scale Distributed Inference Serving Framework ☆6,411 · Updated this week
- ☆17 · Jul 18, 2025 · Updated 8 months ago
- Fast and memory-efficient exact attention ☆21 · Mar 13, 2026 · Updated 2 weeks ago
- 🧯 Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, AI workloads. ☆35 · Mar 14, 2026 · Updated last week
- [WIP] Better (FP8) attention for Hopper ☆32 · Feb 24, 2025 · Updated last year
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆404 · Updated this week
- WG Serving ☆34 · Mar 5, 2026 · Updated 3 weeks ago
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆152 · Mar 19, 2026 · Updated last week
- Container Object Storage Interface (COSI) API responsible to define API for COSI objects. NOTE: The content of this repo has been moved t… ☆69 · Nov 26, 2024 · Updated last year
- 🧬 The adaptive model routing system for exploration and exploitation. ☆22 · Jan 4, 2026 · Updated 2 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆23 · Nov 15, 2024 · Updated last year
- ☆55 · Aug 1, 2025 · Updated 7 months ago
- Custom Scheduler to deploy ML models to TRTIS for GPU Sharing ☆11 · Apr 1, 2020 · Updated 5 years ago
- A workload for deploying LLM inference services on Kubernetes ☆192 · Updated this week
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… ☆1,165 · Feb 23, 2026 · Updated last month
- PyTorch half precision gemm lib w/ fused optional bias + optional relu/gelu ☆78 · Dec 3, 2024 · Updated last year
- OpenAI compatible API for open source LLMs ☆17 · Oct 30, 2023 · Updated 2 years ago