Offline optimization of your disaggregated Dynamo graph
☆195 · Mar 1, 2026 · Updated this week
Alternatives and similar repositories for aiconfigurator
Users interested in aiconfigurator are comparing it to the libraries listed below.
- Simplified Data Management and Sharing for Kubernetes ☆17 · Updated this week
- An Envoy-inspired, ultimate LLM-first gateway for LLM serving and downstream application developers and enterprises ☆26 · Apr 24, 2025 · Updated 10 months ago
- ⚡ Guidance, samples, and tools for HPC workloads on AKS clusters with RDMA and InfiniBand support, including GPUDirect RDMA. ☆21 · Updated this week
- Following the same workflows as Kubernetes. Widely used in the InftyAI community. ☆13 · Dec 5, 2025 · Updated 3 months ago
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆384 · Updated this week
- Simulating Distributed Training at Scale ☆14 · Sep 15, 2025 · Updated 5 months ago
- Open Source Continuous Inference Benchmarking Qwen3.5, DeepSeek, GPTOSS - GB200 NVL72 vs MI355X vs B200 vs GB300 NVL72 vs H100 & soon™ TP… ☆623 · Updated this week
- [ICML 2025] RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression ☆34 · Aug 7, 2025 · Updated 6 months ago
- Benchmark SGLang on SLURM ☆21 · Updated this week
- llm-d helm charts and deployment examples ☆50 · Feb 26, 2026 · Updated last week
- ☆12 · Jul 24, 2024 · Updated last year
- The Intelligent Inference Scheduler for Large-scale Inference Services. ☆64 · Feb 12, 2026 · Updated 3 weeks ago
- ☆13 · Jan 7, 2025 · Updated last year
- A Datacenter Scale Distributed Inference Serving Framework ☆6,154 · Updated this week
- Slowdown prediction module of Echo: Simulating Distributed Training at Scale ☆13 · May 17, 2025 · Updated 9 months ago
- GaussDB driver and toolkit for Go ☆14 · Dec 17, 2025 · Updated 2 months ago
- An open-source icon generation tool based on OpenAI gpt-image-1. ☆14 · Nov 3, 2025 · Updated 4 months ago
- 💫 A lightweight p2p-based cache system for model distributions on Kubernetes. Reframing now to make it a unified cache system with POSI… ☆26 · Dec 6, 2024 · Updated last year
- NVIDIA Inference Xfer Library (NIXL) ☆898 · Feb 28, 2026 · Updated last week
- LLM-Inference-Bench ☆60 · Jul 18, 2025 · Updated 7 months ago
- ☆31 · Apr 19, 2025 · Updated 10 months ago
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆673 · Feb 26, 2026 · Updated last week
- Incubating P/D sidecar for llm-d ☆16 · Nov 13, 2025 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆13 · Feb 11, 2026 · Updated 3 weeks ago
- ☆88 · May 31, 2025 · Updated 9 months ago
- ☆68 · Jun 23, 2025 · Updated 8 months ago
- LLM serving cluster simulator ☆135 · Apr 25, 2024 · Updated last year
- 🧯 Kubernetes coverage for fault awareness and recovery, works for any LLMOps, MLOps, AI workloads. ☆35 · Updated this week
- GenAI inference performance benchmarking tool ☆151 · Feb 27, 2026 · Updated last week
- Disaggregated serving system for Large Language Models (LLMs). ☆778 · Apr 6, 2025 · Updated 11 months ago
- helm repo add daocloud https://daocloud.github.io/dce-charts-repackage/ ☆12 · Updated this week
- ☆16 · Nov 24, 2025 · Updated 3 months ago
- A Triton-only attention backend for vLLM ☆24 · Feb 11, 2026 · Updated 3 weeks ago
- NVIDIA's launch, startup, and logging scripts used by our MLPerf Training and HPC submissions ☆35 · Sep 12, 2025 · Updated 5 months ago
- KV cache store for distributed LLM inference ☆396 · Nov 13, 2025 · Updated 3 months ago
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization ☆2,187 · Feb 27, 2026 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆15 · Feb 18, 2026 · Updated 2 weeks ago
- RDMA core userspace libraries and daemons ☆15 · Feb 16, 2026 · Updated 2 weeks ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆275 · Updated this week