ai-dynamo / aiconfigurator
Offline optimization of your disaggregated Dynamo graph
☆184 · Updated this week
Alternatives and similar repositories for aiconfigurator
Users interested in aiconfigurator are comparing it to the libraries listed below.
- An Envoy-inspired, ultimate LLM-first gateway for LLM serving and downstream application developers and enterprises ☆26 · Updated Apr 24, 2025 (9 months ago)
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆370 · Updated this week
- Following the same workflows as Kubernetes. Widely used in the InftyAI community. ☆13 · Updated Dec 5, 2025 (2 months ago)
- ⚡ Guidance, samples, and tools for HPC workloads on AKS clusters with RDMA and InfiniBand support, including GPUDirect RDMA. ☆20 · Updated Feb 4, 2026 (last week)
- Simulating Distributed Training at Scale ☆14 · Updated Sep 15, 2025 (4 months ago)
- A workload for deploying LLM inference services on Kubernetes ☆170 · Updated this week
- Open Source Continuous Inference Benchmarking - GB200 NVL72 vs MI355X vs B200 vs GB300 NVL72 vs H100 & soon™ TPUv6e/v7/Trainium2/3- DeepS… ☆455 · Updated this week
- llm-d helm charts and deployment examples ☆48 · Updated Feb 7, 2026 (last week)
- [ICML 2025] RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression ☆32 · Updated Aug 7, 2025 (6 months ago)
- ☆12 · Updated Jul 24, 2024 (last year)
- The Intelligent Inference Scheduler for Large-scale Inference Services. ☆61 · Updated Feb 2, 2026 (last week)
- ☆13 · Updated Jan 7, 2025 (last year)
- The main purpose of runtime copilot is to assist with node runtime management tasks such as configuring registries, upgrading versions, i… ☆12 · Updated May 16, 2023 (2 years ago)
- A Datacenter Scale Distributed Inference Serving Framework ☆6,052 · Updated this week
- An open-source icon generation tool based on OpenAI gpt-image-1. ☆14 · Updated Nov 3, 2025 (3 months ago)
- Slowdown prediction module of Echo: Simulating Distributed Training at Scale ☆13 · Updated May 17, 2025 (8 months ago)
- d.run website ☆15 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆876 · Updated this week
- LLM-Inference-Bench ☆59 · Updated Jul 18, 2025 (6 months ago)
- ☆31 · Updated Apr 19, 2025 (9 months ago)
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆662 · Updated Feb 2, 2026 (last week)
- Incubating P/D sidecar for llm-d ☆16 · Updated Nov 13, 2025 (3 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆13 · Updated Feb 6, 2026 (last week)
- ☆42 · Updated Jan 24, 2026 (3 weeks ago)
- ☆66 · Updated Jun 23, 2025 (7 months ago)
- ☆88 · Updated May 31, 2025 (8 months ago)
- LLM serving cluster simulator ☆135 · Updated Apr 25, 2024 (last year)
- 🧯 Kubernetes coverage for fault awareness and recovery; works for any LLMOps, MLOps, or AI workloads. ☆35 · Updated Feb 5, 2026 (last week)
- GenAI inference performance benchmarking tool ☆145 · Updated Feb 6, 2026 (last week)
- Disaggregated serving system for Large Language Models (LLMs). ☆776 · Updated Apr 6, 2025 (10 months ago)
- helm repo add daocloud https://daocloud.github.io/dce-charts-repackage/ ☆12 · Updated this week
- KV cache store for distributed LLM inference ☆392 · Updated Nov 13, 2025 (3 months ago)
- A large-scale simulation framework for LLM inference ☆530 · Updated Jul 25, 2025 (6 months ago)
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization ☆2,156 · Updated Feb 6, 2026 (last week)
- RDMA core userspace libraries and daemons ☆15 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆15 · Updated Feb 4, 2026 (last week)
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling ☆49 · Updated Jul 15, 2025 (6 months ago)
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆266 · Updated this week
- ☆16 · Updated Feb 5, 2024 (2 years ago)