KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches. EMNLP Findings 2024
☆90 · Updated Feb 27, 2025
Alternatives and similar repositories for longctx_bench
Users interested in longctx_bench are comparing it to the repositories listed below.
- This is the repo for remote direct memory introspection. ☆24 · Updated Jun 21, 2023
- ☆22 · Updated Oct 3, 2024
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (see the quantization sketch after this list). ☆384 · Updated Nov 20, 2025
- Decoding Tree Sketching (DTS): a training-free, model-agnostic, plug-in framework for LLM parallel reasoning. ☆68 · Updated Apr 2, 2026
- LTSM-Bundle: A Toolbox and Benchmark on Large Language Models for Time Series Forecasting ☆109 · Updated Sep 8, 2025
- [COLM'25] A Controlled Study on Long Context Extension and Generalization in LLMs ☆64 · Updated Mar 9, 2026
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆689 · Updated this week
- ☆309 · Updated Jul 10, 2025
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆380 · Updated Jul 10, 2025
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆59 · Updated Nov 20, 2024
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction". ☆50 · Updated Oct 18, 2024
- DiscoverPath, a KG-based retrieval system designed for biomedical research. This system aims to assist biomedical researchers in dynami… ☆28 · Updated Oct 25, 2023
- Pytorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆63 · Updated Apr 18, 2024
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆253 · Updated Dec 16, 2024
- ☆28 · Updated Feb 23, 2026
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆131 · Updated Nov 26, 2025
- ☆41 · Updated Oct 16, 2025
- ☆17 · Updated Aug 1, 2025
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆54 · Updated Oct 29, 2024
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆237 · Updated Aug 2, 2024
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate and dynamic sparse attention calculation… ☆1,203 · Updated Apr 8, 2026
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆664 · Updated Jun 1, 2024
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆110 · Updated Oct 11, 2025
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression" (see the L2-norm sketch after this list). ☆18 · Updated Dec 13, 2024
- We have created a new GitHub repository. Please visit https://github.com/ynchuang/DiscoverPath for the latest update. ☆17 · Updated Sep 3, 2023
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,148 · Updated Jan 15, 2025
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models (see the eviction sketch after this list). ☆512 · Updated Aug 1, 2024
- ☆16 · Updated Feb 7, 2023
- Marathon: A Multiple-choice Long Context Evaluation Benchmark for Large Language Models. ☆10 · Updated May 16, 2024
- ☆11 · Updated Jan 19, 2021
- KV cache compression for high-throughput LLM inference ☆156 · Updated Feb 5, 2025
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆181 · Updated Jul 12, 2024
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on … ☆16 · Updated Sep 18, 2025
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆294 · Updated May 1, 2025
- ☆105 · Updated Sep 9, 2024
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆32 · Updated Apr 9, 2025
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆536 · Updated Feb 10, 2025
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆250 · Updated Sep 12, 2025
- [ICLR 2025🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models ☆27 · Updated Jul 7, 2025
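
To make a few of the listed techniques concrete, here are minimal PyTorch sketches. First, the KIVI entry: KIVI's core observation is that key caches have outlier channels while value caches do not, so it quantizes keys per-channel and values per-token at 2 bits (and keeps a small recent window in full precision). The sketch below is an illustrative toy, not the repo's implementation; the helper names (`quantize_2bit`, `dequantize`) and the single-head shapes are assumptions.

```python
import torch

def quantize_2bit(x, dim):
    """Asymmetric 2-bit quantization along `dim` (one group per slice).
    Returns codes in [0, 3] plus per-group scale/zero-point, so that
    x ≈ codes * scale + zero_point. (Illustrative helper, not KIVI's API.)"""
    xmin = x.amin(dim=dim, keepdim=True)
    xmax = x.amax(dim=dim, keepdim=True)
    scale = (xmax - xmin).clamp(min=1e-8) / 3.0   # 2 bits -> 4 levels
    codes = torch.round((x - xmin) / scale).clamp(0, 3)
    return codes.to(torch.uint8), scale, xmin

def dequantize(codes, scale, zero_point):
    return codes.float() * scale + zero_point

# Toy single-head KV cache: (seq_len, head_dim).
keys, values = torch.randn(128, 64), torch.randn(128, 64)

# Keys: per-channel groups (stats over tokens); values: per-token groups.
k_codes, k_scale, k_zp = quantize_2bit(keys, dim=0)
v_codes, v_scale, v_zp = quantize_2bit(values, dim=1)

print("key reconstruction error:",
      (dequantize(k_codes, k_scale, k_zp) - keys).abs().mean().item())
```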
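Second, the L2-norm entry: the paper's finding is a simple correlation, cached keys with a low L2 norm tend to receive high attention, so keeping only the lowest-norm positions is an effective compression heuristic. A minimal sketch of that selection rule under the same toy shapes; `l2_keep_indices` is a hypothetical name, not the repo's API.

```python
import torch

def l2_keep_indices(keys: torch.Tensor, budget: int) -> torch.Tensor:
    """Indices of the `budget` cached positions whose key vectors have the
    lowest L2 norm (the positions the paper finds attention favors)."""
    norms = keys.norm(dim=-1)                         # (seq_len,)
    keep = torch.topk(norms, budget, largest=False).indices
    return keep.sort().values                         # preserve token order

keys, values = torch.randn(256, 64), torch.randn(256, 64)
keep = l2_keep_indices(keys, budget=64)
keys, values = keys[keep], values[keep]               # compressed KV cache
print(keys.shape)                                     # torch.Size([64, 64])
```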
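Finally, the H2O entry: H2O evicts cache entries greedily, keeping a small set of "heavy hitters" (tokens with the largest accumulated attention scores) alongside the most recent tokens. A minimal sketch of that selection, assuming the per-token accumulated scores are tracked during decoding; `h2o_keep_indices` and the budget split are illustrative assumptions, not the repo's API.

```python
import torch

def h2o_keep_indices(acc_attn: torch.Tensor, budget: int, recent: int) -> torch.Tensor:
    """Keep the `recent` newest positions plus the top accumulated-attention
    heavy hitters among older positions, `budget` positions in total."""
    cache_len = acc_attn.shape[0]
    recent_idx = torch.arange(cache_len - recent, cache_len)
    older = acc_attn[: cache_len - recent]
    heavy_idx = torch.topk(older, budget - recent).indices
    return torch.cat([heavy_idx, recent_idx]).sort().values

# Toy example: one accumulated attention score per cached token.
acc_attn = torch.rand(100)
keep = h2o_keep_indices(acc_attn, budget=20, recent=8)
print(keep.numel())  # 20 positions kept out of 100
```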