LMCache / lmcache-vllm
The driver for LMCache core to run in vLLM
☆58 · Updated 11 months ago
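For context, LMCache attaches to vLLM through vLLM's KV-transfer connector interface. The sketch below is a minimal illustration of that wiring, not this repo's exact entry point: it assumes a recent vLLM exposing `KVTransferConfig`, an installed LMCache that registers a connector under the name `LMCacheConnectorV1` (the name used in recent LMCache docs), and a placeholder model name; exact identifiers may differ across versions.

```python
# Minimal sketch: pointing vLLM at LMCache via vLLM's KV-transfer connector.
# Assumptions: recent vLLM with KVTransferConfig; LMCache installed so the
# "LMCacheConnectorV1" connector is registered; model name is a placeholder.
from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    kv_transfer_config=KVTransferConfig(
        kv_connector="LMCacheConnectorV1",  # LMCache-provided connector (assumed name)
        kv_role="kv_both",                  # this engine both saves and loads KV blocks
    ),
)

out = llm.generate(["Summarize the LMCache project."], SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```

Since the standalone driver in this repo predates that connector interface, treat the snippet as showing the general integration pattern rather than this repo's own API.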
Alternatives and similar repositories for lmcache-vllm
Users interested in lmcache-vllm compare it to the libraries listed below.
- KV cache store for distributed LLM inference ☆384 · Updated 2 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆744 · Updated this week
- LLM Serving Performance Evaluation Harness ☆82 · Updated 10 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆251 · Updated this week
- ☆31 · Updated 8 months ago
- A high-performance RL training-inference weight synchronization framework, designed to enable second-level parameter updates from trainin… ☆123 · Updated 3 weeks ago
- Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated 2 weeks ago
- Efficient and easy multi-instance LLM serving ☆520 · Updated 4 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆190 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆368 · Updated last week
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆355 · Updated this week
- ☆60 · Updated last year
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆216 · Updated 3 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated last year
- ☆96 · Updated 9 months ago
- Stateful LLM Serving ☆93 · Updated 10 months ago
- ☆48 · Updated last year
- Fast and memory-efficient exact attention ☆108 · Updated 3 weeks ago
- A high-performance, lightweight router for large-scale vLLM deployments ☆80 · Updated 2 weeks ago
- Offline optimization of your disaggregated Dynamo graph ☆146 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆464 · Updated this week
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆206 · Updated last year
- ☆81 · Updated 2 months ago
- FlagCX is a scalable and adaptive cross-chip communication library. ☆166 · Updated this week
- ☆56 · Updated last year
- Modular and structured prompt caching for low-latency LLM inference ☆109 · Updated last year
- KV cache compression for high-throughput LLM inference ☆148 · Updated 11 months ago
- Perplexity open source garden for inference technology ☆324 · Updated 2 weeks ago
- An early-research-stage expert-parallel load balancer for MoE models, based on linear programming ☆484 · Updated last month