zyxxmu / cam
PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference
☆49 · Updated Jun 19, 2024
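As a rough illustration of the cache-merging idea behind CaM, the toy PyTorch sketch below folds to-be-evicted KV entries into retained ones instead of dropping them when the cache exceeds its budget. This is a minimal sketch under assumptions (attention-score importance, similarity-based merge targets, attention-weighted averaging); it is not the repository's actual implementation, and the function and parameter names are hypothetical.

```python
# Toy sketch of "merge instead of evict" KV-cache compression.
# NOT the official CaM algorithm: importance scoring, merge-target selection,
# and the weighting scheme below are illustrative assumptions.
import torch

def merge_evicted_kv(keys, values, attn_scores, budget):
    """Keep the `budget` highest-scoring cache entries and merge the rest
    into them, rather than discarding evicted entries outright.

    keys, values: (seq_len, head_dim)
    attn_scores:  (seq_len,) accumulated attention each cached token received
    """
    seq_len = keys.size(0)
    keep_idx = torch.topk(attn_scores, k=min(budget, seq_len)).indices
    evict_mask = torch.ones(seq_len, dtype=torch.bool)
    evict_mask[keep_idx] = False
    evict_idx = torch.nonzero(evict_mask, as_tuple=False).squeeze(-1)

    merged_k, merged_v = keys[keep_idx].clone(), values[keep_idx].clone()
    for i in evict_idx:
        # Merge each evicted entry into its most similar retained slot,
        # weighted by the (normalized) attention scores of the two entries.
        sims = torch.mv(merged_k, keys[i])  # dot-product similarity to retained keys
        j = torch.argmax(sims)
        w = attn_scores[i] / (attn_scores[i] + attn_scores[keep_idx[j]] + 1e-6)
        merged_k[j] = (1 - w) * merged_k[j] + w * keys[i]
        merged_v[j] = (1 - w) * merged_v[j] + w * values[i]
    return merged_k, merged_v

# Example: compress a 16-token cache down to 8 slots.
k, v = torch.randn(16, 64), torch.randn(16, 64)
scores = torch.rand(16)
ck, cv = merge_evicted_kv(k, v, scores, budget=8)
print(ck.shape, cv.shape)  # torch.Size([8, 64]) torch.Size([8, 64])
```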
Alternatives and similar repositories for cam
Users interested in cam are comparing it to the libraries listed below.
- ☆302 · Updated Jul 10, 2025
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ☆44 · Updated Aug 14, 2024
- The official implementation of the paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆52 · Updated Oct 18, 2024
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆372 · Updated Jul 10, 2025
- Keyformer proposes KV Cache reduction through key token identification, without the need for fine-tuning ☆58 · Updated Mar 26, 2024
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆176 · Updated Jul 12, 2024
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆155 · Updated Feb 20, 2025
- ☆15 · Updated Jun 4, 2024
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆148 · Updated Aug 9, 2024
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆129 · Updated Nov 26, 2025
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆658 · Updated Sep 30, 2025
- ☆20 · Updated Jun 3, 2023
- [ICLR 2025🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models ☆27 · Updated Jul 7, 2025
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆356 · Updated Nov 20, 2025
- Long Context Extension and Generalization in LLMs ☆62 · Updated Sep 21, 2024
- Source code for the paper "LongGenBench: Long-context Generation Benchmark" ☆24 · Updated Oct 8, 2024
- ☆11 · Updated Apr 3, 2023
- FPGA-based HyperLogLog Accelerator ☆12 · Updated Jul 13, 2020
- ☆49 · Updated Nov 25, 2024
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes. ☆412 · Updated Mar 3, 2025
- Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system. ☆122 · Updated Jan 1, 2026
- Fast and memory-efficient exact attention ☆18 · Updated Jan 23, 2026
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Updated Apr 7, 2025
- ☆36 · Updated Oct 16, 2025
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆32 · Updated Nov 29, 2024
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆174 · Updated Jul 10, 2024
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆502 · Updated Aug 1, 2024
- ☆18 · Updated Oct 14, 2024
- Source code for "Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference." In NeurIPS 2024 ☆21 · Updated Dec 1, 2024
- ACL 2023 ☆39 · Updated Jun 6, 2023
- An extension to the GaLore paper, to perform Natural Gradient Descent in a low-rank subspace ☆18 · Updated Oct 21, 2024
- Accelerating Multitask Training Through Adaptive Transition [Efficient ML Model] ☆12 · Updated May 23, 2025
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆14 · Updated Nov 23, 2024
- SmartNIC ☆14 · Updated Dec 13, 2018
- Official GitHub repository for the paper "Towards timeout-less transport in commodity datacenter networks." ☆16 · Updated Oct 12, 2021
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate and dynamic sparse attention calculation… ☆1,183 · Updated Sep 30, 2025
- Easy control for Key-Value Constrained Generative LLM Inference (https://arxiv.org/abs/2402.06262) ☆63 · Updated Feb 13, 2024
- [CoLM'25] The official implementation of the paper <MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression> ☆155 · Updated Jan 14, 2026
- KV cache compression for high-throughput LLM inference ☆154 · Updated Feb 5, 2025