InftyAI / Awesome-LLMOps
An awesome & curated list of best LLMOps tools.
⭐167 · Updated last month
Alternatives and similar repositories for Awesome-LLMOps
Users interested in Awesome-LLMOps are comparing it to the libraries listed below.
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ⭐267 · Updated this week
- agent-sandbox enables easy management of isolated, stateful, singleton workloads, ideal for use cases like AI agent runtimes. ⭐135 · Updated last week
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ⭐134 · Updated this week
- llm-d helm charts and deployment examples ⭐46 · Updated last month
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ⭐611 · Updated this week
- A lightweight P2P-based cache system for model distributions on Kubernetes. Now being reframed as a unified cache system with POSI… ⭐24 · Updated 11 months ago
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. ⭐71 · Updated 3 months ago
- GenAI inference performance benchmarking tool ⭐123 · Updated this week
- A diverse, simple, and secure all-in-one LLMOps platform ⭐109 · Updated last year
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ⭐307 · Updated last week
- WG Serving ⭐31 · Updated 3 weeks ago
- Gateway API Inference Extension ⭐524 · Updated this week
- Large language model fine-tuning capabilities built on cloud-native and distributed computing. ⭐92 · Updated last year
- Command-line tools for managing OCI model artifacts, which are bundled according to the Model Spec ⭐47 · Updated this week
- Distributed KV cache coordinator ⭐85 · Updated this week
- Example DRA driver that developers can fork and modify to get started writing their own. ⭐105 · Updated 2 weeks ago
- Extensible generative AI platform on Kubernetes with OpenAI-compatible APIs. ⭐90 · Updated last month
- A workload for deploying LLM inference services on Kubernetes ⭐99 · Updated last week
- Inference scheduler for llm-d ⭐103 · Updated this week
- A federation scheduler for multi-cluster environments ⭐56 · Updated last week
- Cloud Native Artificial Intelligence Model Format Specification ⭐116 · Updated this week
- ⭐66 · Updated this week
- Kubernetes Copilot powered by AI (OpenAI/Claude/Gemini/etc.) ⭐225 · Updated last week
- A landscape of the infrastructure that powers the generative AI ecosystem ⭐149 · Updated last year
- A toolkit for discovering cluster network topology. ⭐81 · Updated this week
- 🧯 Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, or AI workloads. ⭐33 · Updated last week
- Device plugin for Volcano vGPU that supports hard resource isolation ⭐128 · Updated last month
- MCP server for Kubernetes management and for diagnosing your cluster and applications ⭐26 · Updated 6 months ago
- The main purpose of runtime copilot is to assist with node runtime management tasks such as configuring registries, upgrading versions, i… ⭐12 · Updated 2 years ago
- Yet another operator for running large language models on Kubernetes with ease. Powered by Ollama! ⭐223 · Updated this week