abdelfattah-lab / xKV
xKV: Cross-Layer SVD for KV-Cache Compression
☆43 · Updated Nov 30, 2025
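For context, the repository title names the technique it implements: compressing the KV cache by factoring the key (or value) caches of several layers jointly with one SVD so that they share a low-rank basis. The sketch below is a minimal toy illustration of that general idea, not the xKV repository's actual code or API; the function names, tensor shapes, and the choice of a shared token-side basis are assumptions made for illustration only.

```python
import torch

def compress_kv_cross_layer(kv_layers, rank):
    """Toy cross-layer low-rank compression of per-layer KV caches.

    kv_layers: list of [num_tokens, head_dim] tensors, one per layer (toy shapes).
    Returns a shared token-side basis and per-layer coefficients.
    """
    # Stack the caches of several layers along the feature dimension and
    # factor them with a single SVD, so all layers share one set of
    # singular vectors instead of being compressed independently.
    stacked = torch.cat(kv_layers, dim=-1)              # [num_tokens, n_layers * head_dim]
    U, S, Vh = torch.linalg.svd(stacked, full_matrices=False)
    U_r = U[:, :rank]                                   # shared low-rank basis
    coeffs = torch.diag(S[:rank]) @ Vh[:rank]           # [rank, n_layers * head_dim]
    return U_r, coeffs

def reconstruct(U_r, coeffs, head_dim):
    """Approximate the original per-layer caches from the shared factorization."""
    approx = U_r @ coeffs
    return list(approx.split(head_dim, dim=-1))         # back to per-layer tensors

# Toy usage: 4 layers, 128 cached tokens, head dimension 64, rank 16.
layers = [torch.randn(128, 64) for _ in range(4)]
U_r, coeffs = compress_kv_cross_layer(layers, rank=16)
recovered = reconstruct(U_r, coeffs, head_dim=64)
print(recovered[0].shape)  # torch.Size([128, 64])
```

Storing the shared basis plus per-layer coefficients instead of the full caches is where the memory saving comes from; how aggressively the rank can be reduced without hurting quality is the kind of trade-off the repositories listed below explore.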
Alternatives and similar repositories for xKV
Users interested in xKV are comparing it to the repositories listed below.
- ☆27 · Updated Nov 25, 2025
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆52 · Updated Oct 18, 2024
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆155 · Updated Feb 20, 2025
- Code and resources for the NeurIPS 2025 paper "BMMR: A Large-Scale Bilingual Multimodal Multi-Discipline Reasoning Dataset" by Zhiheng X… ☆19 · Updated Oct 14, 2025
- ☆14 · Updated Jan 24, 2025
- ☆49 · Updated Nov 25, 2024
- [ICML 2025] "Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding" by Jiajun Zhu, Peihao Wang, Ruisi… ☆14 · Updated Jun 6, 2025
- The official repository of Quamba1 [ICLR 2025] and Quamba2 [ICML 2025] ☆67 · Updated Jun 19, 2025
- ☆19 · Updated Jun 1, 2025
- ☆15 · Updated Apr 11, 2024
- ☆46 · Updated Sep 27, 2025
- A comprehensive and efficient long-context model evaluation framework ☆30 · Updated this week
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆22 · Updated Apr 22, 2025
- The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models" ☆18 · Updated Apr 25, 2025
- ☆20 · Updated Oct 13, 2024
- ☆16 · Updated Jul 23, 2024
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) ☆32 · Updated Nov 28, 2025
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Updated Dec 20, 2024
- Algorithms for approximate attention in LLMs ☆21 · Updated Apr 14, 2025
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 · Updated May 24, 2023
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆21 · Updated Oct 10, 2024
- [ACM MM 2025] LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models ☆23 · Updated Mar 29, 2025
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆20 · Updated Jan 24, 2025
- Official repository of the paper "Context-DPO: Aligning Language Models for Context-Faithfulness" ☆21 · Updated Feb 17, 2025
- ☆22 · Updated Mar 7, 2025
- ☆19 · Updated Sep 24, 2025
- Official implementation of the ECCV 2024 paper POA ☆24 · Updated Aug 8, 2024
- ☆18 · Updated Sep 5, 2024
- ☆40 · Updated May 27, 2025
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆25 · Updated Oct 5, 2024
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆29 · Updated Jul 24, 2025
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP 2024) ☆27 · Updated Oct 3, 2025
- Official implementation of "FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration" ☆29 · Updated Nov 22, 2025
- Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity (ACL 2025, oral) ☆28 · Updated Jun 14, 2025
- Proof of concept for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS 2025] ☆61 · Updated Oct 2, 2025
- [ICLR 2026] SparseD: Sparse Attention for Diffusion Language Models ☆57 · Updated Oct 7, 2025
- ☆33 · Updated Nov 18, 2025
- ☆73 · Updated Dec 16, 2025
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆73 · Updated Jul 14, 2025