Modular and structured prompt caching for low-latency LLM inference
☆112 · Nov 9, 2024 · Updated last year
Alternatives and similar repositories for prompt-cache
Users interested in prompt-cache are comparing it to the libraries listed below.
- Stateful LLM Serving ☆98 · Mar 11, 2025 · Updated last year
- An experimentation platform for LLM inference optimisation ☆36 · Sep 19, 2024 · Updated last year
- ☆177 · Jul 15, 2025 · Updated 9 months ago
- ☆13 · Nov 1, 2021 · Updated 4 years ago
- Efficient and easy multi-instance LLM serving ☆543 · Mar 12, 2026 · Updated last month
- InstAttention: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference ☆16 · Mar 30, 2025 · Updated last year
- Official repo for "On the Generalization Ability of Retrieval-Enhanced Transformers" ☆48 · Jun 4, 2024 · Updated last year
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) ☆27 · Feb 26, 2026 · Updated last month
- ☆15 · Aug 19, 2024 · Updated last year
- ☆22 · Jun 1, 2025 · Updated 10 months ago
- ☆20 · Jun 9, 2025 · Updated 10 months ago
- Code based on vLLM for the paper "Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention" ☆11 · Sep 19, 2024 · Updated last year
- ☆309 · Jul 10, 2025 · Updated 9 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆46 · Jul 17, 2024 · Updated last year
- PyTorch implementation of our ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆48 · Jun 19, 2024 · Updated last year
- ☆17 · Apr 15, 2025 · Updated last year
- Artifacts of EuroSys'24 paper "Exploring Performance and Cost Optimization with ASIC-Based CXL Memory" ☆31 · Feb 21, 2024 · Updated 2 years ago
- LITS: An Optimized Learned Index for Strings ☆13 · Jun 18, 2025 · Updated 10 months ago
- ☆20 · Apr 18, 2024 · Updated 2 years ago
- ☆99 · Nov 25, 2024 · Updated last year
- Chinese fuzzy entity recognition in 100 lines, using a prefix tree and edit distance ☆11 · Sep 25, 2023 · Updated 2 years ago
- A ChatGPT (GPT-3.5) & GPT-4 workload trace to optimize LLM serving systems ☆252 · Mar 19, 2026 · Updated last month
- Disaggregated serving system for Large Language Models (LLMs) ☆801 · Apr 6, 2025 · Updated last year
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆31 · Jun 14, 2024 · Updated last year
- 📀 Encoding YUV into H.264 and transmitting it to an RTMP server ☆10 · Dec 22, 2023 · Updated 2 years ago
- ☆14 · Apr 8, 2023 · Updated 3 years ago
- Implementation of Selective Backpropagation from the paper "Accelerating Deep Learning by Focusing on the Biggest Losers" ☆15 · Feb 2, 2020 · Updated 6 years ago
- ☆12 · Mar 3, 2024 · Updated 2 years ago
- How to plot for papers, slides, demos, etc. ☆10 · Apr 7, 2022 · Updated 4 years ago
- A low-latency & high-throughput serving engine for LLMs ☆491 · Jan 8, 2026 · Updated 3 months ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆85 · Dec 7, 2025 · Updated 4 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆310 · Dec 5, 2025 · Updated 4 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆145 · Dec 4, 2024 · Updated last year
- [HotStorage'24 Best Paper] Can Modern LLMs Tune and Configure LSM-based Key-Value Stores? ☆27 · Nov 27, 2024 · Updated last year
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" ☆18 · Jul 10, 2025 · Updated 9 months ago
- Set of datasets for the deep learning recommendation model (DLRM) ☆49 · Dec 21, 2022 · Updated 3 years ago
- A fully learned index for larger-than-memory databases ☆15 · Sep 17, 2022 · Updated 3 years ago
- Source code for the OSDI 2023 paper "Cilantro: Performance-Aware Resource Allocation for General Objectives via Online Feedback" ☆40 · Jul 6, 2023 · Updated 2 years ago
- ☆20 · Jun 1, 2023 · Updated 2 years ago