LMCache / lmcache-vllm
The driver for LMCache core to run in vLLM
☆60 · Updated 11 months ago
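For context, this driver was typically used as a drop-in replacement for vLLM's entry points, so an existing vLLM script picks up LMCache's KV-cache layer with only an import change. Below is a minimal sketch of that pattern; the `lmcache_vllm.vllm` module path, the `LMCACHE_CONFIG_FILE` environment variable, the config filename, and the model name are assumptions based on the project's documented usage and may differ across releases.

```python
# Hedged sketch: assumes the lmcache_vllm drop-in wrapper around vLLM's
# offline API. Exact module path and env var names may vary by release;
# check the lmcache-vllm README for your version.
import os

# LMCache is typically configured via a YAML file pointed to by an
# environment variable (assumed name and path, for illustration only).
os.environ.setdefault("LMCACHE_CONFIG_FILE", "lmcache_config.yaml")

# Drop-in replacement: `from vllm import LLM, SamplingParams` becomes:
from lmcache_vllm.vllm import LLM, SamplingParams

# Standard vLLM offline-inference API, now backed by LMCache's KV cache.
llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative model
    gpu_memory_utilization=0.8,
)

params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["Summarize the vLLM paged-attention design."], params)
print(outputs[0].outputs[0].text)
```

For online serving, the repo's README reportedly exposed an OpenAI-compatible launcher (`lmcache_vllm serve <model> ...`) mirroring `vllm serve`, so server deployments needed only the command name swapped.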
Alternatives and similar repositories for lmcache-vllm
Users who are interested in lmcache-vllm are comparing it to the libraries listed below
- LLM Serving Performance Evaluation Harness ☆83 · Updated 11 months ago
- A high-performance RL training-inference weight synchronization framework, designed to enable second-level parameter updates from trainin… ☆129 · Updated last month
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆260 · Updated this week
- KV cache store for distributed LLM inference ☆389 · Updated 2 months ago
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆384 · Updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆765 · Updated 3 weeks ago
- ☆31 · Updated 9 months ago
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆131 · Updated 4 months ago
- Toolchain built around Megatron-LM for distributed training ☆84 · Updated last month
- ☆48 · Updated last year
- Fast and memory-efficient exact attention ☆111 · Updated this week
- A high-performance and lightweight router for large-scale vLLM deployment ☆95 · Updated last week
- Efficient and easy multi-instance LLM serving ☆523 · Updated 5 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆123 · Updated last month
- ☆96 · Updated 10 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated last year
- Stateful LLM Serving ☆95 · Updated 10 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆225 · Updated 3 weeks ago
- A low-latency & high-throughput serving engine for LLMs ☆471 · Updated 3 weeks ago
- Perplexity GPU Kernels ☆554 · Updated 2 months ago
- ☆56 · Updated last year
- Modular and structured prompt caching for low-latency LLM inference ☆110 · Updated last year
- ☆84 · Updated 3 months ago
- FlagCX is a scalable and adaptive cross-chip communication library ☆170 · Updated this week
- ☆61 · Updated last year
- KV cache compression for high-throughput LLM inference ☆151 · Updated 11 months ago
- Offline optimization of your disaggregated Dynamo graph ☆177 · Updated this week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆220 · Updated this week
- ☆73 · Updated 4 months ago