LMCache / lmcache-vllm
The driver for LMCache core to run in vLLM
☆60 · Feb 4, 2025 · Updated last year
Alternatives and similar repositories for lmcache-vllm
Users interested in lmcache-vllm are comparing it to the libraries listed below.
- ☆164 · Jul 15, 2025 · Updated 6 months ago
- vLLM performance dashboard ☆41 · Apr 26, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆13 · Feb 6, 2026 · Updated last week
- A simple tool for parsing the profile.json file of mxnet ☆14 · Aug 1, 2018 · Updated 7 years ago
- ☆151 · Oct 9, 2024 · Updated last year
- clustering algorithm implementation ☆13 · Nov 3, 2025 · Updated 3 months ago
- Supercharge Your LLM with the Fastest KV Cache Layer ☆6,871 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆876 · Updated this week
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆147 · Dec 23, 2025 · Updated last month
- Benchmarking the serving capabilities of vLLM ☆59 · Aug 20, 2024 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Sep 4, 2024 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆69 · Nov 4, 2024 · Updated last year
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆18 · Dec 23, 2025 · Updated last month
- Stateful LLM Serving ☆95 · Mar 11, 2025 · Updated 11 months ago
- Efficient and easy multi-instance LLM serving ☆527 · Sep 3, 2025 · Updated 5 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Sep 18, 2025 · Updated 4 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Feb 10, 2025 · Updated last year
- ☆23 · Feb 6, 2026 · Updated last week
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs". ☆16 · Sep 15, 2024 · Updated last year
- EmbeddedLLM: API server for Embedded Device Deployment. Currently supports CUDA/OpenVINO/IpexLLM/DirectML/CPU ☆50 · Oct 6, 2024 · Updated last year
- ☆27 · Apr 17, 2025 · Updated 9 months ago
- Manages vllm-nccl dependency ☆17 · Jun 3, 2024 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). ☆776 · Apr 6, 2025 · Updated 10 months ago
- LLM Serving Performance Evaluation Harness ☆83 · Feb 25, 2025 · Updated 11 months ago
- Nexusflow function call, tool use, and agent benchmarks. ☆30 · Dec 13, 2024 · Updated last year
- Rust crate for some audio utilities ☆27 · Mar 8, 2025 · Updated 11 months ago
- Library to interface Compilers and ML models for ML-Enabled Compiler Optimizations ☆20 · Oct 19, 2025 · Updated 3 months ago
- A low-latency & high-throughput serving engine for LLMs ☆470 · Jan 8, 2026 · Updated last month
- Self-host LLMs with LMDeploy and BentoML ☆22 · Dec 26, 2025 · Updated last month
- Static and dynamic inspection tool for TensorFlow models ☆24 · Nov 11, 2018 · Updated 7 years ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆52 · Oct 18, 2024 · Updated last year
- ☆155 · Mar 4, 2025 · Updated 11 months ago
- Batch scheduler based on the K8s scheduling framework; related features have been contributed to scheduler-plugins (deprecated). ☆25 · Aug 6, 2020 · Updated 5 years ago
- KV cache compression for high-throughput LLM inference ☆153 · Feb 5, 2025 · Updated last year
- A machine learning competition in Automated Deep Learning (AutoDL), co-organized by ChaLearn, Google and 4Paradigm. Accepted at NeurIPS 2… ☆22 · Dec 10, 2020 · Updated 5 years ago
- ☆21 · Apr 17, 2025 · Updated 9 months ago
- KV cache store for distributed LLM inference ☆392 · Nov 13, 2025 · Updated 3 months ago
- ☆96 · Dec 6, 2024 · Updated last year
- FlagCX is a scalable and adaptive cross-chip communication library. ☆172 · Feb 6, 2026 · Updated last week