run-ai / genv
GPU environment and cluster management with LLM support
☆642 · Updated last year
Alternatives and similar repositories for genv
Users interested in genv are comparing it to the libraries listed below.
- Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elas… ☆672 · Updated last year
- Module, Model, and Tensor Serialization/Deserialization ☆267 · Updated last month
- A top-like tool for monitoring GPUs in a cluster ☆85 · Updated last year
- ☆255 · Updated 2 weeks ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆394 · Updated last week
- ClearML Fractional GPU - Run multiple containers on the same GPU with driver level memory limitation ✨ and compute time-slicing ☆80 · Updated last year
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆212 · Updated 5 months ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (a minimal usage sketch follows this list). ☆823 · Updated 2 months ago
- ☆278 · Updated 7 months ago
- Distributed Model Serving Framework ☆177 · Updated 2 weeks ago
- Practical GPU Sharing Without Memory Size Constraints ☆287 · Updated 6 months ago
- MIG Partition Editor for NVIDIA GPUs ☆217 · Updated this week
- NVIDIA Data Center GPU Manager (DCGM) is a project for gathering telemetry and measuring the health of NVIDIA GPUs ☆598 · Updated last month
- KAI Scheduler is an open source Kubernetes Native scheduler for AI workloads at large scale ☆849 · Updated last week
- ☆315 · Updated last year
- Controller for ModelMesh ☆237 · Updated 4 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆494 · Updated this week
- CUDA checkpoint and restore utility ☆373 · Updated last month
- markdown docs ☆94 · Updated this week
- Benchmark Suite for Deep Learning ☆276 · Updated this week
- W&B Server is the self-hosted version of Weights & Biases ☆328 · Updated last week
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆438 · Updated this week
- RayLLM - LLMs on Ray (Archived). Read README for more info. ☆1,263 · Updated 7 months ago
- Run Slurm in Kubernetes ☆292 · Updated this week
- Container plugin for Slurm Workload Manager ☆386 · Updated 2 weeks ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆415 · Updated last week
- aim-mlflow integration ☆221 · Updated 2 years ago
- Where GPUs get cooked 👩🍳🔥 ☆293 · Updated 3 weeks ago
- ClearML - Model-Serving Orchestration and Repository Solution ☆157 · Updated 2 weeks ago
- ClearML Agent - ML-Ops made easy. ML-Ops scheduler & orchestration solution ☆278 · Updated 2 months ago
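
The PyTriton entry in the list above describes a Flask/FastAPI-like binding API for Triton. The sketch below illustrates what such a binding typically looks like, based on PyTriton's documented `Triton`/`bind` interface; the model name, tensor names, and the toy `infer_fn` are illustrative assumptions rather than anything taken from the listing itself.

```python
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(data):
    # Toy inference callable: doubles each input batch (placeholder for a real model).
    return {"result": data * 2}


# Bind the Python callable as a Triton model and serve it over Triton's HTTP/gRPC endpoints.
with Triton() as triton:
    triton.bind(
        model_name="doubler",  # illustrative model name
        infer_func=infer_fn,
        inputs=[Tensor(name="data", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="result", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=64),
    )
    triton.serve()  # blocks until interrupted
```

The appeal of this style, as the listing's description suggests, is that a plain Python function becomes a Triton-served model without writing a model repository or backend by hand.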