InftyAI / llmaz
☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!
☆267 · Updated last week
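For orientation, here is a minimal sketch of what serving a model with llmaz might look like, assuming the OpenModel and Playground custom resources referenced in the project's documentation; the API versions, field names, and model name below are illustrative assumptions and should be checked against the upstream examples.

```yaml
# Minimal llmaz sketch (assumed CRDs and fields; verify against the llmaz docs).
# Registers a model from a model hub, then asks llmaz to run an inference Playground for it.
apiVersion: llmaz.io/v1alpha1
kind: OpenModel
metadata:
  name: qwen2-0--5b-instruct
spec:
  familyName: qwen2
  source:
    modelHub:
      modelID: Qwen/Qwen2-0.5B-Instruct
---
apiVersion: inference.llmaz.io/v1alpha1
kind: Playground
metadata:
  name: qwen2-0--5b-instruct
spec:
  replicas: 1
  modelClaim:
    modelName: qwen2-0--5b-instruct
```

Applied with `kubectl apply -f`, manifests along these lines would, per the project's description, have the operator stand up the inference backend and serving endpoint for the claimed model.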
Alternatives and similar repositories for llmaz
Users who are interested in llmaz are comparing it to the libraries listed below.
- A federation scheduler for multi-cluster ☆56 · Updated last week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆611 · Updated last week
- Device plugin for Volcano vGPU, which supports hard resource isolation ☆130 · Updated last month
- Device plugins for Volcano, e.g. GPU ☆129 · Updated 7 months ago
- Large language model fine-tuning capabilities based on cloud-native and distributed computing. ☆92 · Updated last year
- ☆162 · Updated 3 weeks ago
- HAMi-core compiles libvgpu.so, which enforces hard limits on GPU use in containers ☆253 · Updated this week
- Gateway API Inference Extension ☆524 · Updated this week
- Using CRDs to manage GPU resources in Kubernetes. ☆209 · Updated 2 years ago
- A lightweight P2P-based cache system for model distributions on Kubernetes. Reframing now to make it a unified cache system with POSI… ☆24 · Updated 11 months ago
- The limitless expansion of Kubernetes. Make Kubernetes without boundaries ☆253 · Updated 4 months ago
- A workload for deploying LLM inference services on Kubernetes ☆105 · Updated last week
- NVIDIA DRA Driver for GPUs ☆482 · Updated this week
- Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, AI workloads. ☆33 · Updated last week
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. ☆71 · Updated 4 months ago
- elastic-gpu-scheduler is a Kubernetes scheduler extender for GPU resource scheduling. ☆144 · Updated 2 years ago
- A unified scheduler for online and offline tasks ☆628 · Updated 7 months ago
- An awesome & curated list of best LLMOps tools. ☆170 · Updated this week
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆312 · Updated this week
- JobSet: a k8s-native API for distributed ML training and HPC workloads ☆279 · Updated this week
- Inference scheduler for llm-d ☆103 · Updated last week
- The API (CRD) of Volcano ☆46 · Updated this week
- ☆309 · Updated this week
- agent-sandbox enables easy management of isolated, stateful, singleton workloads, ideal for use cases like AI agent runtimes. ☆296 · Updated this week
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆454 · Updated this week
- A Cloud-Native Service Catalog and Full Lifecycle Management Platform across Multi-cloud and Edge ☆32 · Updated 2 years ago
- Go Abstraction for Allocating NVIDIA GPUs with Custom Policies ☆118 · Updated last week
- Katalyst aims to provide a universal solution to help improve resource utilization and optimize the overall costs in the cloud. This is t… ☆522 · Updated this week
- The Volcano Descheduler ☆21 · Updated 9 months ago
- ☆122 · Updated 3 years ago