Modular and structured prompt caching for low-latency LLM inference
☆109 · Nov 9, 2024 · Updated last year
Alternatives and similar repositories for prompt-cache
Users who are interested in prompt-cache are comparing it to the libraries listed below.
- Stateful LLM Serving · ☆97 · Mar 11, 2025 · Updated last year
- An experimentation platform for LLM inference optimisation · ☆36 · Sep 19, 2024 · Updated last year
- ☆169 · Jul 15, 2025 · Updated 8 months ago
- ☆13 · Nov 1, 2021 · Updated 4 years ago
- Efficient and easy multi-instance LLM serving · ☆536 · Mar 12, 2026 · Updated 2 weeks ago
- Official repo for "On the Generalization Ability of Retrieval-Enhanced Transformers" · ☆48 · Jun 4, 2024 · Updated last year
- Official implementation of "TailorKV: A Hybrid Framework for Long-Context Inference via Tailored KV Cache Optimization" (Findings of ACL … · ☆21 · Jul 25, 2025 · Updated 8 months ago
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) · ☆27 · Feb 26, 2026 · Updated last month
- ☆15 · Aug 19, 2024 · Updated last year
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth · ☆17 · Aug 21, 2023 · Updated 2 years ago
- ☆20 · Jun 1, 2025 · Updated 9 months ago
- [VLDB'23] A Skew-Resistant Index for Processing-in-Memory · ☆27 · Jan 5, 2026 · Updated 2 months ago
- ☆20 · Jun 9, 2025 · Updated 9 months ago
- ☆311 · Jul 10, 2025 · Updated 8 months ago
- Code based on vLLM for the paper "Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention" · ☆11 · Sep 19, 2024 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 · ☆46 · Jul 17, 2024 · Updated last year
- PyTorch implementation of our paper accepted by ICML 2024, "CaM: Cache Merging for Memory-efficient LLMs Inference" · ☆48 · Jun 19, 2024 · Updated last year
- ☆16 · Apr 15, 2025 · Updated 11 months ago
- Artifacts of the EuroSys'24 paper "Exploring Performance and Cost Optimization with ASIC-Based CXL Memory" · ☆31 · Feb 21, 2024 · Updated 2 years ago
- LITS: An Optimized Learned Index for Strings · ☆13 · Jun 18, 2025 · Updated 9 months ago
- ☆20 · Apr 18, 2024 · Updated last year
- ☆99 · Nov 25, 2024 · Updated last year
- Chinese fuzzy entity recognition in 100 lines, using a prefix tree (trie) and edit distance · ☆11 · Sep 25, 2023 · Updated 2 years ago
- A ChatGPT (GPT-3.5) & GPT-4 workload trace to optimize LLM serving systems · ☆243 · Mar 19, 2026 · Updated last week
- Disaggregated serving system for Large Language Models (LLMs) · ☆792 · Apr 6, 2025 · Updated 11 months ago
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) · ☆31 · Jun 14, 2024 · Updated last year
- 📀 Encoding YUV into H.264 and transmitting it to an RTMP server · ☆10 · Dec 22, 2023 · Updated 2 years ago
- ☆33 · Oct 13, 2025 · Updated 5 months ago
- ☆14 · Apr 8, 2023 · Updated 2 years ago
- Implementation of Selective Backpropagation from the paper "Accelerating Deep Learning by Focusing on the Biggest Losers" · ☆15 · Feb 2, 2020 · Updated 6 years ago
- ☆11 · Mar 3, 2024 · Updated 2 years ago
- How to plot for papers, slides, demos, etc. · ☆10 · Apr 7, 2022 · Updated 3 years ago
- A low-latency & high-throughput serving engine for LLMs · ☆486 · Jan 8, 2026 · Updated 2 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… · ☆293 · Dec 5, 2025 · Updated 3 months ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference · ☆83 · Dec 7, 2025 · Updated 3 months ago
- Preparing a Machine Learning/Computer Vision environment for the NVidia Jetson TX2 · ☆16 · Jul 5, 2017 · Updated 8 years ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆145 · Dec 4, 2024 · Updated last year
- [HotStorage'24 Best Paper] Can Modern LLMs Tune and Configure LSM-based Key-Value Stores? · ☆27 · Nov 27, 2024 · Updated last year
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" · ☆17 · Jul 10, 2025 · Updated 8 months ago