PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference
☆48 · Jun 19, 2024 · Updated last year
Alternatives and similar repositories for cam
Users interested in cam are comparing it to the libraries listed below.
- ☆311 · Jul 10, 2025 · Updated 8 months ago
- This repo contains the source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆43 · Aug 14, 2024 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆50 · Oct 18, 2024 · Updated last year
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning ☆57 · Mar 26, 2024 · Updated 2 years ago
- [ICLR 2025 🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models ☆27 · Jul 7, 2025 · Updated 8 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆149 · Aug 9, 2024 · Updated last year
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆381 · Jul 10, 2025 · Updated 8 months ago
- ☆14 · Jun 4, 2024 · Updated last year
- The official implementation of Ada-KV [NeurIPS 2025] ☆128 · Nov 26, 2025 · Updated 4 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆678 · Feb 24, 2026 · Updated last month
- ☆47 · Nov 25, 2024 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆155 · Feb 20, 2025 · Updated last year
- Awesome-LLM-KV-Cache: A curated list of 📙 Awesome LLM KV Cache Papers with Codes. ☆419 · Mar 3, 2025 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆180 · Jul 12, 2024 · Updated last year
- Easy control for Key-Value Constrained Generative LLM Inference (https://arxiv.org/abs/2402.06262) ☆62 · Feb 13, 2024 · Updated 2 years ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆182 · Jul 10, 2024 · Updated last year
- ☆39 · Oct 16, 2025 · Updated 5 months ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models ☆507 · Aug 1, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆21 · Mar 13, 2026 · Updated 2 weeks ago
- ☆20 · Jun 3, 2023 · Updated 2 years ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆363 · Nov 20, 2025 · Updated 4 months ago
- ☆36 · Oct 10, 2024 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,203 · Mar 9, 2026 · Updated 2 weeks ago
- Token Omission Via Attention ☆127 · Oct 13, 2024 · Updated last year
- [COLM'25] A Controlled Study on Long Context Extension and Generalization in LLMs ☆64 · Mar 9, 2026 · Updated 2 weeks ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speedup with better task performance… ☆157 · Apr 7, 2025 · Updated 11 months ago
- Source code for the paper "LongGenBench: Long-context Generation Benchmark" ☆23 · Oct 8, 2024 · Updated last year
- KV cache compression for high-throughput LLM inference ☆154 · Feb 5, 2025 · Updated last year
- Source code for "Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference" (NeurIPS 2024) ☆21 · Dec 1, 2024 · Updated last year
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆157 · Jan 14, 2026 · Updated 2 months ago
- ☆19 · Oct 14, 2024 · Updated last year
- Accelerating Multitask Training Through Adaptive Transition [Efficient ML Model] ☆12 · May 23, 2025 · Updated 10 months ago
- ☆14 · Aug 28, 2024 · Updated last year
- Running inference on the ZeroSCROLLS benchmark ☆20 · Apr 18, 2024 · Updated last year
- ☆10 · Aug 22, 2023 · Updated 2 years ago
- ☆84 · Nov 10, 2025 · Updated 4 months ago
- Look Back to Reason Forward: Revisitable Memory for Long-Context LLM Agents ☆27 · Mar 9, 2026 · Updated 2 weeks ago
- Introduction to Quantization ☆20 · Mar 3, 2024 · Updated 2 years ago
- ☆49 · Mar 3, 2024 · Updated 2 years ago