clearml / clearml-fractional-gpu
ClearML Fractional GPU - Run multiple containers on the same GPU with driver-level memory limitation ✨ and compute time-slicing
☆78 · Updated 10 months ago
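The tagline above is about capping GPU memory at the driver level per container. As a minimal sketch (not taken from the clearml-fractional-gpu docs), the snippet below assumes it runs inside one of the project's memory-limited containers and simply prints what the CUDA driver reports, so any cap becomes visible; it uses only standard PyTorch calls.

```python
# Minimal sketch: observe the GPU memory visible inside a (possibly
# memory-capped) container. Assumes PyTorch with CUDA support is installed.
import torch

if torch.cuda.is_available():
    free_b, total_b = torch.cuda.mem_get_info(0)  # wraps cudaMemGetInfo
    print(f"GPU 0: {free_b / 2**30:.1f} GiB free of {total_b / 2**30:.1f} GiB visible")
else:
    print("No CUDA device visible in this environment")
```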
Alternatives and similar repositories for clearml-fractional-gpu
Users interested in clearml-fractional-gpu are comparing it to the libraries listed below.
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆205 · Updated 2 months ago
- Self-host LLMs with vLLM and BentoML ☆123 · Updated this week
- ☆221 · Updated this week
- A top-like tool for monitoring GPUs in a cluster ☆84 · Updated last year
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes. ☆98 · Updated last week
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆137 · Updated 11 months ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆32 · Updated this week
- ☆18 · Updated 10 months ago
- Module, Model, and Tensor Serialization/Deserialization ☆240 · Updated last week
- Inference server benchmarking tool ☆74 · Updated 2 months ago
- GPU environment and cluster management with LLM support ☆612 · Updated last year
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆64 · Updated 2 weeks ago
- Distributed Model Serving Framework ☆170 · Updated 3 weeks ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆358 · Updated this week
- The Triton backend for TensorRT. ☆77 · Updated last week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated last month
- The Triton backend for the ONNX Runtime. ☆152 · Updated last week
- ☆54 · Updated 7 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆477 · Updated 2 weeks ago
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated 8 months ago
- Repository for open inference protocol specification ☆56 · Updated last month
- OpenAI compatible API for TensorRT LLM triton backend ☆209 · Updated 10 months ago
- Controller for ModelMesh ☆232 · Updated 2 weeks ago
- The Triton backend for the PyTorch TorchScript models. ☆152 · Updated last week
- ☆310 · Updated 10 months ago
- Easy and Efficient Quantization for Transformers ☆199 · Updated 4 months ago
- Where GPUs get cooked 👩‍🍳🔥 ☆234 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 8 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆67 · Updated last year
- ☆62 · Updated 2 months ago