LMCache / lmcache-vllm
The driver for LMCache core to run in vLLM
☆41 · Updated 4 months ago
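For context on what this driver does, here is a minimal usage sketch. It assumes the drop-in import path `lmcache_vllm.vllm` that mirrors the upstream `vllm` API; the model name, prompt, and parameters are placeholders rather than details from this page, and the exact interface may differ across versions.

```python
# Minimal sketch (assumption: lmcache_vllm exposes a drop-in mirror of the vllm API,
# so LMCache's KV-cache layer is enabled by swapping the import).
# Model name and prompt are illustrative placeholders.
from lmcache_vllm.vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.0, max_tokens=64)

# Requests that share a long prefix benefit from LMCache's KV-cache reuse.
outputs = llm.generate(["Summarize the following document: ..."], params)
for out in outputs:
    print(out.outputs[0].text)
```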
Alternatives and similar repositories for lmcache-vllm
Users interested in lmcache-vllm are comparing it to the libraries listed below.
- LLM Serving Performance Evaluation Harness ☆78 · updated 3 months ago
- KV cache store for distributed LLM inference ☆269 · updated 2 weeks ago
- ☆28 · updated 2 months ago
- Stateful LLM Serving ☆73 · updated 3 months ago
- ☆26 · updated 3 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆87 · updated last month
- ☆47 · updated 11 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆163 · updated 9 months ago
- A low-latency & high-throughput serving engine for LLMs ☆379 · updated 3 weeks ago
- ☆12 · updated 2 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆123 · updated last year
- ☆62 · updated last year
- ☆120 · updated 5 months ago
- Efficient and easy multi-instance LLM serving ☆437 · updated this week
- ☆86 · updated 2 months ago
- ☆109 · updated 8 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · updated 6 months ago
- Modular and structured prompt caching for low-latency LLM inference ☆96 · updated 7 months ago
- ☆55 · updated 9 months ago
- Perplexity GPU Kernels ☆364 · updated last week
- ☆37 · updated 6 months ago
- ☆103 · updated 7 months ago
- A lightweight design for computation-communication overlap ☆141 · updated last week
- DeepSeek-V3/R1 inference performance simulator ☆148 · updated 2 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks ☆100 · updated last year
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆42 · updated last month
- PyTorch distributed training acceleration framework ☆49 · updated 4 months ago
- ☆54 · updated 7 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · updated last month
- ☆38 · updated 5 months ago