coreweave / ml-containers
☆44 · Updated this week
Alternatives and similar repositories for ml-containers
Users who are interested in ml-containers are comparing it to the libraries listed below.
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes. ☆124 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆286 · Updated 5 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Updated 4 months ago
- Helm charts for llm-d ☆52 · Updated 6 months ago
- ☆278 · Updated 2 weeks ago
- ☆31 · Updated 9 months ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerat… ☆162 · Updated this week
- High-performance safetensors model loader ☆94 · Updated 3 weeks ago
- A top-like tool for monitoring GPUs in a cluster ☆84 · Updated last year
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆403 · Updated 3 weeks ago
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆131 · Updated 4 months ago
- Cloud Native Benchmarking of Foundation Models ☆44 · Updated 6 months ago
- A collection of reproducible inference engine benchmarks ☆38 · Updated 9 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆89 · Updated this week
- Repository for the open inference protocol specification ☆63 · Updated 8 months ago
- CUDA checkpoint and restore utility ☆406 · Updated 4 months ago
- Home for the OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- ☆61 · Updated last year
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆133 · Updated last week
- ☆321 · Updated last year
- The driver for LMCache core to run in vLLM ☆60 · Updated last year
- The NVIDIA GPU driver container allows the provisioning of the NVIDIA driver through the use of containers. ☆155 · Updated this week
- A lightweight, user-friendly data-plane for LLM training. ☆38 · Updated 4 months ago
- A toolkit for discovering cluster network topology. ☆96 · Updated this week
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆142 · Updated last week
- WIP. Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous. ☆17 · Updated 3 years ago
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆472 · Updated this week
- ☆21 · Updated 11 months ago
- A tool to detect infrastructure issues on cloud native AI systems ☆52 · Updated 4 months ago