PyTorch implementation of our paper accepted at ICML 2024: "CaM: Cache Merging for Memory-efficient LLMs Inference"
☆49 · Jun 19, 2024 · Updated last year
Alternatives and similar repositories for CaM
Users interested in CaM are comparing it to the libraries listed below.
- ☆313 · Jul 10, 2025 · Updated 9 months ago
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆43 · Aug 14, 2024 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆50 · Oct 18, 2024 · Updated last year
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning ☆57 · Mar 26, 2024 · Updated 2 years ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆150 · Aug 9, 2024 · Updated last year
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆384 · Jul 10, 2025 · Updated 9 months ago
- ☆14 · Jun 4, 2024 · Updated last year
- The official implementation of Ada-KV [NeurIPS 2025] ☆132 · Nov 26, 2025 · Updated 5 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗) ☆696 · Apr 15, 2026 · Updated 3 weeks ago
- ☆47 · Nov 25, 2024 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆155 · Feb 20, 2025 · Updated last year
- Awesome-LLM-KV-Cache: a curated list of 📙 awesome LLM KV cache papers with code ☆430 · Mar 3, 2025 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆183 · Jul 12, 2024 · Updated last year
- Easy control for Key-Value Constrained Generative LLM Inference (https://arxiv.org/abs/2402.06262) ☆62 · Feb 13, 2024 · Updated 2 years ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆184 · Jul 10, 2024 · Updated last year
- ☆42 · Oct 16, 2025 · Updated 6 months ago
- Fast and memory-efficient exact attention ☆22 · Apr 10, 2026 · Updated 3 weeks ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models (see the toy sketch after this list) ☆514 · Aug 1, 2024 · Updated last year
- ☆19 · Jun 3, 2023 · Updated 2 years ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆390 · Nov 20, 2025 · Updated 5 months ago
- ☆37 · Oct 10, 2024 · Updated last year
- Token Omission Via Attention ☆127 · Oct 13, 2024 · Updated last year
- [COLM'25] A Controlled Study on Long Context Extension and Generalization in LLMs ☆65 · Mar 9, 2026 · Updated last month
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speedup with better task performance… ☆156 · Apr 7, 2025 · Updated last year
- Source code for the paper "LongGenBench: Long-context Generation Benchmark" ☆22 · Oct 8, 2024 · Updated last year
- KV cache compression for high-throughput LLM inference ☆157 · Feb 5, 2025 · Updated last year
- Source code for "Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference" (NeurIPS 2024) ☆21 · Dec 1, 2024 · Updated last year
- [COLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆157 · Jan 14, 2026 · Updated 3 months ago
- ☆19 · Oct 14, 2024 · Updated last year
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,164 · Jan 15, 2025 · Updated last year
- Accelerating Multitask Training Through Adaptive Transition [Efficient ML Model] ☆12 · May 23, 2025 · Updated 11 months ago
- ☆15 · Aug 28, 2024 · Updated last year
- Running inference on the ZeroSCROLLS benchmark ☆22 · Apr 18, 2024 · Updated 2 years ago
- ☆10 · Aug 22, 2023 · Updated 2 years ago
- Robust Differentially Private Training of Deep Neural Networks ☆12 · Dec 10, 2020 · Updated 5 years ago
- Introduction to Quantization ☆20 · Mar 3, 2024 · Updated 2 years ago
- ☆49 · Mar 3, 2024 · Updated 2 years ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆382 · Sep 25, 2024 · Updated last year
- [VLDB'26, NeurIPS'25] Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system ☆134 · Feb 22, 2026 · Updated 2 months ago
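
For orientation: several repositories above (H2O, Keyformer, and others) reduce KV cache memory by scoring cached tokens with accumulated attention weight and evicting low scorers while always retaining a recent local window. Below is a minimal, illustrative NumPy sketch of that heavy-hitter idea; the function name, array shapes, and scoring are simplifications made for this example, not code from any listed repository.

```python
import numpy as np

def evict_kv_cache(keys, values, attn_history, budget, recent=4):
    """Toy heavy-hitter-style KV eviction (in the spirit of H2O; not any
    listed repo's actual code). Keeps the `recent` newest tokens plus the
    older tokens with the highest accumulated attention, `budget` in total.

    keys, values:  (seq_len, head_dim) cached K/V for one attention head
    attn_history:  (seq_len,) attention weight accumulated per cached token
    """
    assert budget > recent, "budget must exceed the recent window"
    seq_len = keys.shape[0]
    if seq_len <= budget:
        return keys, values, attn_history

    # Always keep the most recent tokens (local window).
    recent_idx = np.arange(seq_len - recent, seq_len)

    # Among older tokens, keep the heaviest hitters by accumulated score.
    older_idx = np.arange(seq_len - recent)
    n_heavy = budget - recent
    heavy_idx = older_idx[np.argsort(attn_history[older_idx])[-n_heavy:]]

    keep = np.sort(np.concatenate([heavy_idx, recent_idx]))
    return keys[keep], values[keep], attn_history[keep]

# Example: a 16-token cache squeezed to a budget of 8 entries.
rng = np.random.default_rng(0)
k, v = rng.normal(size=(16, 64)), rng.normal(size=(16, 64))
scores = rng.random(16)  # stand-in for accumulated attention weights
k2, v2, s2 = evict_kv_cache(k, v, scores, budget=8)
print(k2.shape)  # (8, 64)
```

The listed projects differ mainly in how they produce the score and where they apply the budget (per head, per layer, or globally); CaM itself merges evicted entries into retained ones rather than simply dropping them.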