vllm-project / router
A high-performance, lightweight router for large-scale vLLM deployment
☆112 · Updated this week
Alternatives and similar repositories for router
Users interested in router are comparing it to the libraries listed below
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆263 · Updated last week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆773 · Updated this week
- Efficient and easy multi-instance LLM serving ☆524 · Updated 5 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆228 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆391 · Updated this week
- KV cache store for distributed LLM inference ☆390 · Updated 2 months ago
- ☆342 · Updated 2 weeks ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · Updated 8 months ago
- torchcomms: a modern PyTorch communications API ☆330 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆470 · Updated last month
- Perplexity GPU Kernels ☆560 · Updated 3 months ago
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated last week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆683 · Updated this week
- Perplexity open source garden for inference technology ☆362 · Updated last month
- Allow torch tensor memory to be released and resumed later ☆216 · Updated 3 weeks ago
- The driver for LMCache core to run in vLLM ☆60 · Updated last year
- Materials for learning SGLang ☆738 · Updated last month
- Offline optimization of your disaggregated Dynamo graph ☆184 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆876 · Updated this week
- Fast and memory-efficient exact attention ☆114 · Updated this week
- A high-performance RL training-inference weight synchronization framework, designed to enable second-level parameter updates from trainin… ☆131 · Updated last month
- ☆206 · Updated 9 months ago
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆564 · Updated 2 weeks ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆462 · Updated last month
- Accelerating MoE with IO and Tile-aware Optimizations ☆569 · Updated 3 weeks ago
- Toolchain built around Megatron-LM for Distributed Training ☆86 · Updated 2 months ago
- KV cache compression for high-throughput LLM inference ☆153 · Updated last year
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆269 · Updated last week
- ☆96 · Updated 10 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆945 · Updated 3 months ago