[ICLR2025] Code and data for paper: Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning
☆41 · Updated Mar 10, 2025
Alternatives and similar repositories for HeadKV
Users interested in HeadKV are comparing it to the repositories listed below.
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆131 · Updated Nov 26, 2025
- ☆37 · Updated Mar 17, 2025
- Efficient retrieval head analysis with triton flash attention that supports topK probability ☆13 · Updated Jun 15, 2024
- ☆20 · Updated Jun 17, 2024
- Chain-of-Thought Matters: Improving Long-Context Language Models with Reasoning Path Supervision ☆19 · Updated Apr 1, 2025
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆36 · Updated Jul 11, 2024
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆168 · Updated Oct 13, 2025
- ☆11 · Updated May 24, 2024
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆91 · Updated Nov 29, 2025
- ☆41 · Updated Oct 16, 2025
- ☆15 · Updated Jul 31, 2025
- [CVPR 2025] Efficient Personalization of Quantized Diffusion Model without Backpropagation ☆16 · Updated Mar 31, 2025
- To mitigate position bias in LLMs, especially in long-context scenarios, we scale only one dimension of LLMs, reducing position bias and … ☆11 · Updated Jun 18, 2024
- Official implementation of the paper: "A deeper look at depth pruning of LLMs" ☆15 · Updated Jul 24, 2024
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆154 · Updated Feb 20, 2025
- ☆13 · Updated Mar 28, 2025
- "Tail-Aware Sperm Analysis for Transparent Tracking of Spermatozoa" Official Implementation ☆10 · Updated Jan 21, 2026
- ☆11 · Updated May 19, 2025
- KV cache compression via sparse coding ☆17 · Updated Oct 26, 2025
- Align, a general text alignment function ☆15 · Updated Dec 7, 2023
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆535 · Updated Feb 10, 2025
- Simple implementation of Expected Gradients and Integrated Gradients in PyTorch ☆12 · Updated May 11, 2022
- ☆52 · Updated May 13, 2024
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆149 · Updated Aug 9, 2024
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆31 · Updated Jul 4, 2024
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆58 · Updated Nov 20, 2024
- ☆47 · Updated Nov 25, 2024
- ☆22 · Updated Sep 16, 2025
- ☆11 · Updated Sep 10, 2023
- Fork of karpathy's nanoGPT with custom datasets ☆10 · Updated Jul 25, 2023
- ☆13 · Updated Jan 7, 2025
- ☆22 · Updated Oct 10, 2025
- ☆14 · Updated Oct 3, 2024
- Code and Dataset release of "Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models" (NAACL 2024) ☆10 · Updated Oct 16, 2024
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆55 · Updated Jul 16, 2024
- AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence ☆10 · Updated Mar 2, 2025
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆30 · Updated this week
- ☆239 · Updated Nov 19, 2025
- Source code of "Reinforcement Learning with Token-level Feedback for Controllable Text Generation" (NAACL 2024) ☆17 · Updated Dec 8, 2024