☆84 · Updated Oct 9, 2024
Alternatives and similar repositories for Awesome-KV-Cache
Users that are interested in Awesome-KV-Cache are comparing it to the libraries listed below.
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" · ☆36 · Updated Jul 11, 2024
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… · ☆310 · Updated Dec 5, 2025
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆50 · Updated Oct 18, 2024
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism · ☆30 · Updated Jul 17, 2024
- Source code of the paper "KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing" · ☆31 · Updated Oct 24, 2024
- ☆23 · Updated May 21, 2025
- ☆28 · Updated May 24, 2025
- The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization… · ☆16 · Updated Feb 15, 2024
- ☆23 · Updated Mar 7, 2025
- InstAttention: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference · ☆16 · Updated Mar 30, 2025
- ☆14 · Updated Oct 3, 2024
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes. · ☆427 · Updated Mar 3, 2025
- Go package implementing an indexable ordered multimap · ☆25 · Updated Feb 3, 2019
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. · ☆512 · Updated Aug 1, 2024
- Dynamic Context Selection for Efficient Long-Context LLMs · ☆56 · Updated May 20, 2025
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) · ☆52 · Updated Dec 17, 2024
- LLM Inference with Microscaling Format · ☆34 · Updated Nov 12, 2024
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models · ☆78 · Updated Oct 16, 2024
- ☆20 · Updated Jun 9, 2025
- ☆21 · Updated Feb 5, 2024
- [NeurIPS 2023] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models · ☆18 · Updated Dec 6, 2023
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning · ☆57 · Updated Mar 26, 2024
- ☆53 · Updated May 13, 2024
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models · ☆18 · Updated Nov 4, 2025
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs · ☆43 · Updated Aug 14, 2024
- ☆47 · Updated Nov 25, 2024
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs · ☆200 · Updated Apr 8, 2026
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding · ☆279 · Updated Aug 31, 2024
- Codes for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) · ☆46 · Updated Dec 9, 2023
- 1st Place Team Crane: @aswinkumar1999 @rathull @kyolebu · ☆30 · Updated Sep 8, 2025
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… · ☆104 · Updated Nov 9, 2024
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆384 · Updated Nov 20, 2025
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation · ☆253 · Updated Dec 16, 2024
- [ICLR2025] Code and data for the paper "Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasonin… · ☆42 · Updated Mar 10, 2025
- An implementation of LazyLLM token pruning for the LLaMa 2 model family. · ☆13 · Updated Jan 6, 2025
- Understanding Differencing Algorithms for Mobile Application Updates · ☆15 · Updated Nov 15, 2023
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference · ☆59 · Updated Nov 20, 2024
- ☆84 · Updated Apr 18, 2025
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximately and dynamically compute sparse attention… · ☆1,203 · Updated Apr 8, 2026