Cost-efficient and pluggable Infrastructure components for GenAI inference
☆4,682 · Mar 19, 2026 · Updated this week
Alternatives and similar repositories for aibrix
Users that are interested in aibrix are comparing it to the libraries listed below.
- A Datacenter Scale Distributed Inference Serving Framework ☆6,347 · Updated this week
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization ☆2,227 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · Updated this week
- Gateway API Inference Extension ☆616 · Updated this week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆682 · Updated this week
- Supercharge Your LLM with the Fastest KV Cache Layer ☆7,745 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆73,479 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,953 · Updated this week
- A toolkit to run Ray applications on Kubernetes ☆2,388 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆945 · Updated this week
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, Slurm, 20+ cl…) ☆9,664 · Updated this week
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes ☆5,216 · Updated this week
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆9,770 · Mar 9, 2026 · Updated 2 weeks ago
- KV cache store for distributed LLM inference ☆399 · Nov 13, 2025 · Updated 4 months ago
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes ☆2,657 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,169 · Updated this week
- DeepEP: an efficient expert-parallel communication library ☆9,053 · Feb 9, 2026 · Updated last month
- Heterogeneous GPU Sharing on Kubernetes ☆3,110 · Updated this week
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,972 · May 15, 2025 · Updated 10 months ago
- Composable building blocks to build LLM Apps ☆8,301 · Updated this week
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… ☆1,165 · Feb 23, 2026 · Updated last month
- Unsloth Studio is a web UI for training and running open models like Qwen, DeepSeek, gpt-oss and Gemma locally. ☆57,673 · Updated this week
- A Cloud Native Batch System (Project under CNCF) ☆5,395 · Updated this week
- KAI Scheduler is an open-source Kubernetes-native scheduler for AI workloads at large scale ☆1,181 · Mar 17, 2026 · Updated last week
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,521 · Feb 6, 2026 · Updated last month
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆397 · Updated this week
- Large Language Model Text Generation Inference ☆10,812 · Jan 8, 2026 · Updated 2 months ago
- verl: Volcano Engine Reinforcement Learning for LLMs ☆20,097 · Updated this week
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a… ☆39,597 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,891 · Updated this week
- A QoS-based scheduling system brings optimal layout and status to workloads such as microservices, web services, big data jobs, AI jobs, … ☆1,665 · Mar 16, 2026 · Updated last week
- A minimal Python framework for building custom AI inference servers with full control over logic, batching, and scaling. ☆3,823 · Updated this week
- A lightweight data processing framework built on DuckDB and 3FS. ☆4,938 · Mar 5, 2025 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆949 · Oct 29, 2025 · Updated 4 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,711 · Updated this week
- Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open-source LLM / VLM! ☆8,915 · Updated this week
- Efficient and easy multi-instance LLM serving ☆532 · Mar 12, 2026 · Updated last week