coreweave / ml-containers
☆38 · Updated 2 weeks ago
Alternatives and similar repositories for ml-containers
Users interested in ml-containers are comparing it to the libraries listed below
- ☆238 · Updated last week
- Helm charts for llm-d ☆50 · Updated last month
- Module, Model, and Tensor Serialization/Deserialization ☆256 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated 3 months ago
- Kubernetes Operator, ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes. ☆107 · Updated last week
- High-performance safetensors model loader ☆53 · Updated last month
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆137 · Updated last week
- A top-like tool for monitoring GPUs in a cluster ☆85 · Updated last year
- CUDA checkpoint and restore utility ☆360 · Updated 6 months ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆35 · Updated this week
- A collection of reproducible inference engine benchmarks ☆32 · Updated 4 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆79 · Updated 3 weeks ago
- Repository for open inference protocol specification ☆59 · Updated 3 months ago
- xet client tech, used in huggingface_hub ☆171 · Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆128 · Updated last week
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆226 · Updated this week
- Cloud Native Benchmarking of Foundation Models ☆40 · Updated 3 weeks ago
- ☆31 · Updated 4 months ago
- ☆55 · Updated 9 months ago
- The driver for LMCache core to run in vLLM ☆47 · Updated 6 months ago
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆369 · Updated 2 months ago
- ☆314 · Updated last year
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆124 · Updated this week
- GPU Environment Management for Visual Studio Code ☆39 · Updated 2 years ago
- NVIDIA NCCL Tests for Distributed Training ☆105 · Updated this week
- WIP. Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous. ☆18 · Updated 3 years ago
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆413 · Updated this week
- MLFlow Deployment Plugin for Ray Serve ☆46 · Updated 3 years ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆190 · Updated this week
- Custom Scheduler to deploy ML models to TRTIS for GPU Sharing ☆12 · Updated 5 years ago