coreweave / ml-containers
☆42, updated this week
Alternatives and similar repositories for ml-containers
Users interested in ml-containers are comparing it to the libraries listed below.
- IBM development fork of https://github.com/huggingface/text-generation-inference (☆62, updated 2 months ago)
- Module, Model, and Tensor Serialization/Deserialization (☆276, updated 3 months ago)
- ☆267, updated last week
- High-performance safetensors model loader (☆76, updated 2 weeks ago)
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes (☆113, updated last week)
- ☆31, updated 7 months ago
- A collection of reproducible inference engine benchmarks (☆38, updated 7 months ago)
- Helm charts for llm-d (☆50, updated 4 months ago)
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray (☆130, updated 2 months ago)
- vLLM adapter for a TGIS-compatible gRPC server (☆45, updated last week)
- A top-like tool for monitoring GPUs in a cluster (☆85, updated last year)
- Repository for the open inference protocol specification (☆60, updated 6 months ago)
- The driver for LMCache core to run in vLLM (☆58, updated 10 months ago)
- Home for the OctoML PyTorch Profiler (☆114, updated 2 years ago)
- Benchmark suite for LLMs from Fireworks.ai (☆84, updated last week)
- Benchmark for machine learning model online serving: LLM, embedding, Stable Diffusion, Whisper (☆28, updated 2 years ago)
- CUDA checkpoint and restore utility (☆393, updated 2 months ago)
- MLflow Deployment Plugin for Ray Serve (☆46, updated 3 years ago)
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… (☆234, updated last week)
- Simple dependency injection framework for Python (☆21, updated last year)
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) (☆322, updated this week)
- ☆316, updated last year
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… (☆154, updated this week)
- MLPerf™ logging library (☆37, updated last month)
- WIP. Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous (☆17, updated 3 years ago)
- xet client tech, used in huggingface_hub (☆340, updated this week)
- Pipeline parallelism for the minimalist (☆37, updated 3 months ago)
- The NVIDIA GPU driver container allows provisioning of the NVIDIA driver through containers (☆145, updated this week)
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM (☆140, updated this week)
- NVIDIA NCCL Tests for Distributed Training (☆126, updated 3 weeks ago)