run-ai / genv
GPU environment and cluster management with LLM support
☆604 · Updated 11 months ago
Alternatives and similar repositories for genv:
Users interested in genv are comparing it to the libraries listed below.
- ☆205 · Updated last month
- ClearML Fractional GPU - Run multiple containers on the same GPU with driver-level memory limitation ✨ and compute time-slicing ☆77 · Updated 9 months ago
- A top-like tool for monitoring GPUs in a cluster ☆86 · Updated last year
- Practical GPU Sharing Without Memory Size Constraints ☆264 · Updated last month
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆789 · Updated 2 months ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆360 · Updated this week
- Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elas… ☆656 · Updated last year
- ☆304 · Updated 8 months ago
- aim-mlflow integration ☆210 · Updated last year
- markdown docs ☆86 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆225 · Updated 2 months ago
- CUDA checkpoint and restore utility ☆330 · Updated 3 months ago
- MIG Partition Editor for NVIDIA GPUs ☆198 · Updated last week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs. ☆199 · Updated 2 weeks ago
- KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale ☆531 · Updated this week
- NVIDIA Data Center GPU Manager (DCGM) is a project for gathering telemetry and measuring the health of NVIDIA GPUs ☆500 · Updated this week
- Controller for ModelMesh ☆229 · Updated last month
- NVIDIA GPU metrics exporter for Prometheus leveraging DCGM ☆1,174 · Updated 2 weeks ago
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆351 · Updated this week
- Triton Model Analyzer is a CLI tool to help better understand the compute and memory requirements of the Triton Inference Serv… ☆474 · Updated 2 weeks ago
- ClearML - Model-Serving Orchestration and Repository Solution ☆150 · Updated 3 months ago
- GPU plugin to the node feature discovery for Kubernetes ☆300 · Updated 11 months ago
- A library to analyze PyTorch traces. ☆367 · Updated last week
- Pipeline Parallelism for PyTorch ☆765 · Updated 8 months ago
- This repository contains tutorials and examples for Triton Inference Server ☆692 · Updated 3 weeks ago
- PyTorch per-step fault tolerance (actively under development) ☆291 · Updated this week
- A simple yet powerful tool to turn traditional container/OS images into unprivileged sandboxes. ☆741 · Updated 4 months ago
- Run cloud-native workloads on NVIDIA GPUs ☆168 · Updated last week
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆179 · Updated last week
- Common source, scripts, and utilities for creating Triton backends. ☆318 · Updated this week