ModelEngine-Group / unified-cache-management
Persist and reuse KV Cache to speed up your LLM.
☆244 · Updated this week
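To illustrate the idea behind the project, here is a minimal, hypothetical sketch of prefix-based KV-cache reuse in Python. The `KVCacheStore` class and its methods are illustrative assumptions, not unified-cache-management's actual API: a real system persists attention key/value tensors per token block and reuses the longest cached prefix so only the new suffix needs prefill.

```python
# Minimal, hypothetical sketch of prefix-based KV-cache reuse.
# KVCacheStore and its methods are illustrative, not the project's real API.
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass
class KVCacheStore:
    """Maps a token-id prefix to its persisted KV blocks."""
    _store: Dict[Tuple[int, ...], Any] = field(default_factory=dict)

    def put(self, token_ids: Tuple[int, ...], kv_blocks: Any) -> None:
        # Persist the KV cache computed for this exact prefix.
        self._store[token_ids] = kv_blocks

    def longest_prefix(self, token_ids: Tuple[int, ...]):
        # Find the longest cached prefix of the request, scanning longest-first.
        for end in range(len(token_ids), 0, -1):
            prefix = token_ids[:end]
            if prefix in self._store:
                return prefix, self._store[prefix]
        return (), None

store = KVCacheStore()
store.put((1, 2, 3), "kv-for-[1,2,3]")           # persisted from an earlier request
prefix, kv = store.longest_prefix((1, 2, 3, 4))  # reuse: only token 4 needs prefill
print(len(prefix), kv)                           # -> 3 kv-for-[1,2,3]
```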
Alternatives and similar repositories for unified-cache-management
Users interested in unified-cache-management are comparing it to the libraries listed below.
- Efficient and easy multi-instance LLM serving ☆523 · Updated 4 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆497 · Updated 10 months ago
- KV cache store for distributed LLM inference ☆389 · Updated 2 months ago
- Offline optimization of your disaggregated Dynamo graph ☆168 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs); see the sketch after this list. ☆772 · Updated 9 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆760 · Updated 2 weeks ago
- ☆145 · Updated this week
- SGLang kernel library for NPU ☆95 · Updated last week
- High performance Transformer implementation in C++. ☆148 · Updated last year
- vLLM Kunlun (vllm-kunlun) is a community-maintained hardware plugin designed to run vLLM seamlessly on the Kunlun XPU. ☆239 · Updated this week
- DeepSeek-V3/R1 inference performance simulator ☆177 · Updated 10 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- ☆522 · Updated last week
- SGLang is a fast serving framework for large language models and vision language models. ☆27 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆844 · Updated this week
- NVIDIA NCCL Tests for Distributed Training ☆133 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,031 · Updated last week
- Materials for learning SGLang ☆728 · Updated 3 weeks ago
- Hooks CUDA-related dynamic libraries using automated code generation tools. ☆172 · Updated 2 years ago
- ☆340 · Updated 3 weeks ago
- FlagScale is a large model toolkit based on open-source projects. ☆468 · Updated last week
- A workload for deploying LLM inference services on Kubernetes ☆160 · Updated last week
- High Performance LLM Inference Operator Library ☆222 · Updated last week
- Injecting Adrenaline into LLM Serving: Boosting Resource Utilization and Throughput via Attention Disaggregation ☆40 · Updated 2 months ago
- This repository organizes materials, recordings, and schedules related to AI-infra learning meetings. ☆312 · Updated 3 weeks ago
- ☆77 · Updated last year
- ☆73 · Updated last year
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆659 · Updated this week
- LMCache on Ascend ☆45 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆260 · Updated this week
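Several entries above describe prefill/decode-disaggregated serving. Below is a minimal, hypothetical sketch of that pattern in plain Python; the worker functions, the stand-in sampling, and the in-process KV handoff are illustrative assumptions, not any listed project's API. In a real deployment the KV cache is transferred between GPU nodes (e.g. via a transfer library such as NIXL) rather than passed as a local value.

```python
# Hypothetical sketch of prefill/decode disaggregation: a prefill worker builds
# the KV cache for the full prompt, hands it to a decode worker, and the decode
# worker generates token by token against that cache.
from typing import Any, List, Tuple

def prefill_worker(prompt_ids: List[int]) -> Tuple[int, Any]:
    # Heavy, compute-bound pass over the whole prompt; returns the first
    # generated token plus the KV cache to transfer to a decode node.
    kv_cache = f"kv-for-{prompt_ids}"      # stand-in for real KV tensors
    first_token = sum(prompt_ids) % 100    # stand-in for real sampling
    return first_token, kv_cache

def decode_worker(first_token: int, kv_cache: Any, max_new: int) -> List[int]:
    # Light, memory-bound loop: each step attends against the transferred
    # cache, appends one token's KV, and samples the next token.
    assert kv_cache is not None            # decode relies on the shipped cache
    out = [first_token]
    for _ in range(max_new - 1):
        out.append((out[-1] + 1) % 100)    # stand-in for real sampling
    return out

tok, kv = prefill_worker([1, 2, 3, 4])
print(decode_worker(tok, kv, max_new=4))   # -> [10, 11, 12, 13]
```

Splitting the two phases lets compute-bound prefill and memory-bound decode scale on separate hardware pools, which is the motivation shared by the disaggregated-serving projects in this list.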