The driver for LMCache core to run in vLLM
☆63 · Feb 4, 2025 · Updated last year
Alternatives and similar repositories for lmcache-vllm
Users interested in lmcache-vllm are comparing it to the libraries listed below.
- ☆169 · Jul 15, 2025 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆13 · Feb 11, 2026 · Updated last month
- vLLM performance dashboard · ☆43 · Apr 26, 2024 · Updated last year
- Supercharge Your LLM with the Fastest KV Cache Layer · ☆7,745 · Updated this week
- ☆154 · Oct 9, 2024 · Updated last year
- A fork of SGLang for hip-attention integration. Please refer to hip-attention for details. · ☆18 · Dec 23, 2025 · Updated 3 months ago
- A simple tool for parsing the profile.json file of mxnet · ☆14 · Aug 1, 2018 · Updated 7 years ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank · ☆75 · Nov 4, 2024 · Updated last year
- Manages the vllm-nccl dependency · ☆17 · Jun 3, 2024 · Updated last year
- NVIDIA Inference Xfer Library (NIXL) · ☆945 · Mar 20, 2026 · Updated last week
- Efficient and easy multi-instance LLM serving · ☆536 · Mar 12, 2026 · Updated 2 weeks ago
- Stateful LLM Serving · ☆97 · Mar 11, 2025 · Updated last year
- ☆18 · Mar 4, 2025 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆532 · Feb 10, 2025 · Updated last year
- ☆28 · Apr 17, 2025 · Updated 11 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length · ☆151 · Dec 23, 2025 · Updated 3 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend · ☆219 · Aug 1, 2024 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs) · ☆792 · Apr 6, 2025 · Updated 11 months ago
- LLM deployment project based on ONNX · ☆50 · Oct 9, 2024 · Updated last year
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs" · ☆16 · Sep 15, 2024 · Updated last year
- ☆10 · Nov 21, 2023 · Updated 2 years ago
- A low-latency and high-throughput serving engine for LLMs · ☆486 · Jan 8, 2026 · Updated 2 months ago
- ☆155 · Mar 4, 2025 · Updated last year
- LLM Serving Performance Evaluation Harness · ☆83 · Feb 25, 2025 · Updated last year
- ☆39 · Oct 16, 2025 · Updated 5 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity · ☆94 · Sep 4, 2024 · Updated last year
- C++ RPC based on RDMA · ☆13 · Sep 12, 2023 · Updated 2 years ago
- ☆28 · Jul 29, 2025 · Updated 7 months ago
- Self-host LLMs with LMDeploy and BentoML · ☆22 · Dec 26, 2025 · Updated 3 months ago
- Code for the paper "Long cOntext aliGnment via efficient preference Optimization" · ☆24 · Oct 10, 2025 · Updated 5 months ago
- Benchmarking the serving capabilities of vLLM · ☆59 · Aug 20, 2024 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆50 · Oct 18, 2024 · Updated last year
- A curated list for Efficient Large Language Models · ☆11 · Mar 25, 2024 · Updated 2 years ago
- MPI Code Generation through Domain-Specific Language Models · ☆15 · Nov 19, 2024 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs · ☆180 · Jul 12, 2024 · Updated last year
- Annotations made while reading the LevelDB source code, continuously updated · ☆12 · Feb 23, 2023 · Updated 3 years ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks · ☆119 · Mar 13, 2024 · Updated 2 years ago
- Clustering algorithm implementation · ☆13 · Nov 3, 2025 · Updated 4 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference · ☆63 · Sep 18, 2025 · Updated 6 months ago