vllm-project / production-stack
vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization
☆1,778 · Updated this week
Alternatives and similar repositories for production-stack
Users interested in production-stack are comparing it to the libraries listed below:
- Supercharge Your LLM with the Fastest KV Cache Layer ☆5,210 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,950 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆3,723 · Updated last week
- LLMPerf is a library for validating and benchmarking LLMs ☆1,001 · Updated 9 months ago
- llm-d enables high-performance distributed LLM inference on Kubernetes ☆1,755 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆4,997 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆577 · Updated this week
- Serverless LLM Serving for Everyone. ☆543 · Updated this week
- Fast, Flexible and Portable Structured Generation ☆1,233 · Updated this week
- Materials for learning SGLang ☆572 · Updated 2 weeks ago
- Large Language Model (LLM) Systems Paper List ☆1,495 · Updated 2 weeks ago
- Cost-efficient and pluggable Infrastructure components for GenAI inference ☆4,242 · Updated this week
- slime is an LLM post-training framework for RL Scaling. ☆1,747 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs ☆887 · Updated last month
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆3,967 · Updated this week
- The Triton TensorRT-LLM Backend ☆887 · Updated last week
- My learning notes/codes for ML SYS. ☆3,632 · Updated this week
- Nano vLLM ☆6,553 · Updated 2 weeks ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,605 · Updated this week
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆4,518 · Updated last month
- Efficient and easy multi-instance LLM serving ☆484 · Updated 2 weeks ago
- Minimalistic large language model 3D-parallelism training ☆2,212 · Updated 2 weeks ago
- Disaggregated serving system for Large Language Models (LLMs). ☆687 · Updated 5 months ago
- Expert Parallelism Load Balancer ☆1,265 · Updated 5 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆622 · Updated this week
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆778 · Updated last week
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆269 · Updated this week
- Community maintained hardware plugin for vLLM on Ascend ☆1,128 · Updated this week
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,780 · Updated last week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,127 · Updated last month