coreweave / ml-containers
☆44 · Updated last week
Alternatives and similar repositories for ml-containers
Users interested in ml-containers are comparing it to the libraries listed below.
- Module, Model, and Tensor Serialization/Deserialization ☆286 · Updated 5 months ago (see the sketch after this list)
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes. ☆124 · Updated this week
- ☆280 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Updated 4 months ago
- ☆31 · Updated 9 months ago
- Helm charts for llm-d ☆52 · Updated 6 months ago
- High-performance safetensors model loader ☆99 · Updated 3 weeks ago
- CUDA checkpoint and restore utility ☆410 · Updated 4 months ago
- A top-like tool for monitoring GPUs in a cluster ☆84 · Updated last year
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆162 · Updated last week
- This is a landscape of the infrastructure that powers the generative AI ecosystem ☆151 · Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 4 months ago
- A collection of reproducible inference engine benchmarks ☆38 · Updated 9 months ago
- ☆60 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆404 · Updated last month
- Repository for open inference protocol specification ☆64 · Updated 8 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆89 · Updated last week
- vLLM adapter for a TGIS-compatible gRPC server. ☆50 · Updated this week
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper) ☆28 · Updated 2 years ago
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆263 · Updated this week
- The driver for LMCache core to run in vLLM ☆60 · Updated last year
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆146 · Updated this week
- Simple dependency injection framework for Python ☆21 · Updated last year
- An experimental implementation of compiler-driven automatic sharding of models across a given device mesh. ☆52 · Updated this week
- ☆61 · Updated last year
- ☆13 · Updated 2 years ago
- MLflow Deployment Plugin for Ray Serve ☆46 · Updated 3 years ago
- This repository contains statistics about AI infrastructure products. ☆17 · Updated 11 months ago
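
The first entry in the list above ("Module, Model, and Tensor Serialization/Deserialization") appears to describe CoreWeave's tensorizer. A minimal sketch of round-tripping a PyTorch module through it, assuming the `TensorSerializer`/`TensorDeserializer` API and using `model.tensors` as a placeholder local path:

```python
import torch
from tensorizer import TensorSerializer, TensorDeserializer

# Serialize every tensor in a module to a single flat file.
model = torch.nn.Linear(8, 8)
serializer = TensorSerializer("model.tensors")  # placeholder output path
serializer.write_module(model)
serializer.close()

# Deserialize the file back into a freshly constructed module
# of the same architecture.
restored = torch.nn.Linear(8, 8)
deserializer = TensorDeserializer("model.tensors")
deserializer.load_into_module(restored)
```

This is only a sketch of the local-file case; the library also targets streaming weights from object storage, so consult the repository's README for the streaming helpers and loader options.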