Cost-efficient and pluggable infrastructure components for GenAI inference
☆4,765 · Updated this week (Apr 29, 2026)
Alternatives and similar repositories for aibrix
Users interested in aibrix are comparing it to the libraries listed below.
- A Datacenter Scale Distributed Inference Serving Framework (☆6,701, updated this week)
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization (☆2,312, updated this week)
- SGLang is a high-performance serving framework for large language models and multimodal models. (☆26,832, updated this week)
- Gateway API Inference Extension (☆660, updated this week)
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication (☆712, updated this week)
- Supercharge Your LLM with the Fastest KV Cache Layer (☆8,187, updated this week)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. (☆5,242, updated this week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆78,385, updated this week)
- A toolkit to run Ray applications on Kubernetes (☆2,476, updated this week)
- FlashInfer: Kernel Library for LLM Serving (☆5,544, updated this week)
- NVIDIA Inference Xfer Library (NIXL) (☆1,011, updated this week)
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, Slurm, 20+ cl… (☆9,923, updated this week)
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes (☆5,395, last updated Apr 24, 2026)
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. (☆9,847, last updated Mar 30, 2026)
- KV cache store for distributed LLM inference (☆410, last updated Nov 13, 2025)
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… (☆13,487, last updated Apr 27, 2026)
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes (☆3,107, updated this week)
- Heterogeneous GPU Sharing on Kubernetes (☆3,386, updated this week)
- DeepEP: an efficient expert-parallel communication library (☆9,589, updated this week)
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation (☆7,985, last updated May 15, 2025)
- Open GenAI Stack (☆8,364, updated this week)
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… (☆1,191, last updated Mar 31, 2026)
- Web UI for training and running open models like Gemma 4, Qwen3.6, DeepSeek, gpt-oss locally. (☆63,070, last updated Apr 27, 2026)
- A Cloud Native Batch System (Project under CNCF) (☆5,530, updated this week)
- FlashMLA: Efficient Multi-head Latent Attention Kernels (☆12,614, updated this week)
- KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale (☆1,245, last updated Apr 27, 2026)
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… (☆435, updated this week)
- Large Language Model Text Generation Inference (☆10,848, last updated Mar 21, 2026)
- verl/HybridFlow: A Flexible and Efficient RL Post-Training Framework (☆21,046, updated this week)
- Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing a… (☆45,153, updated this week)
- A QoS-based scheduling system that brings optimal layout and status to workloads such as microservices, web services, big data jobs, AI jobs, … (☆1,678, updated this week)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (☆3,169, updated this week)
- A minimal Python framework for building custom AI inference servers with full control over logic, batching, and scaling. (☆3,878, updated this week)
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. (☆7,823, last updated Apr 26, 2026)
- A lightweight data processing framework built on DuckDB and 3FS. (☆4,951, last updated Mar 5, 2025)
- A throughput-oriented high-performance serving framework for LLMs (☆954, last updated Mar 29, 2026)
- Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM! (☆9,215, updated this week)
- An invoice generator app built using Next.js, Typescript, and Shadcn (☆6,212, last updated Apr 16, 2026)
- Efficient and easy multi-instance LLM serving (☆547, last updated Mar 12, 2026)
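Several of the serving engines listed above (vLLM, SGLang, Text Generation Inference, LMDeploy, TensorRT LLM) expose an OpenAI-compatible `/v1/chat/completions` HTTP endpoint, so a client written once can be pointed at any of them. The sketch below builds such a request with only the standard library; the base URL and model name are placeholders for whatever your deployment actually serves, not values taken from any of these projects.

```python
import json
import urllib.request

# Placeholder endpoint: substitute the host/port your serving engine reports.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible /chat/completions endpoint."""
    payload = {
        "model": model,  # model identifier as registered with the server
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("my-model", "Hello!")
# Sending it with urllib.request.urlopen(req) returns a JSON body whose
# choices[0].message.content field holds the generated reply.
```

Because the wire format is shared, switching between the engines in this list is mostly a matter of server startup flags and the model name; the client code stays the same.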