xKV: Cross-Layer SVD for KV-Cache Compression
☆48 · Nov 30, 2025 · Updated 4 months ago
Alternatives and similar repositories for xKV
Users interested in xKV are comparing it to the libraries listed below.
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection · ☆154 · Feb 20, 2025 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆50 · Oct 18, 2024 · Updated last year
- ☆27 · Nov 25, 2025 · Updated 4 months ago
- ☆47 · Nov 25, 2024 · Updated last year
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models · ☆24 · Oct 5, 2024 · Updated last year
- ☆22 · Jun 1, 2025 · Updated 10 months ago
- The official repository of Quamba1 [ICLR 2025] & Quamba2 [ICML 2025] · ☆67 · Jun 19, 2025 · Updated 9 months ago
- Algorithms for approximate attention in LLMs · ☆22 · Apr 14, 2025 · Updated last year
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" · ☆22 · Apr 22, 2025 · Updated 11 months ago
- Code and resources for the NeurIPS 2025 paper "BMMR: A Large-Scale Bilingual Multimodal Multi-Discipline Reasoning Dataset" by Zhiheng X… · ☆19 · Oct 14, 2025 · Updated 6 months ago
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) · ☆35 · Nov 28, 2025 · Updated 4 months ago
- The official implementation of the DAC 2024 paper GQA-LUT · ☆22 · Dec 20, 2024 · Updated last year
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) · ☆22 · May 24, 2023 · Updated 2 years ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆384 · Nov 20, 2025 · Updated 4 months ago
- Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and hardware roofline mod… · ☆633 · Sep 11, 2024 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. · ☆46 · Jun 11, 2025 · Updated 10 months ago
- [ICML'25] "Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding" by Jiajun Zhu, Peihao Wang, Ruisi… · ☆15 · Jun 6, 2025 · Updated 10 months ago
- Universal data I/O and neural network modules for NLP tasks · ☆18 · Updated this week
- Pruning methods for PyTorch with an optimizer-like interface · ☆15 · Apr 14, 2020 · Updated 6 years ago
- Verilog bit slicing for Python · ☆11 · May 13, 2021 · Updated 4 years ago
- ☆46 · Sep 27, 2025 · Updated 6 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆291 · May 1, 2025 · Updated 11 months ago
- ☆20 · Oct 13, 2024 · Updated last year
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation · ☆10 · May 19, 2025 · Updated 10 months ago
- ☆13 · Nov 29, 2024 · Updated last year
- ☆17 · Feb 3, 2023 · Updated 3 years ago
- AnyDSL traversal code · ☆15 · Feb 18, 2019 · Updated 7 years ago
- ☆15 · Apr 11, 2024 · Updated 2 years ago
- Multi-Layer Key-Value sharing experiments on Pythia models · ☆34 · Jun 14, 2024 · Updated last year
- Optimize the order of execution for tf.einsum · ☆14 · May 31, 2017 · Updated 8 years ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ☆418 · Aug 13, 2024 · Updated last year
- ☆24 · Updated this week
- ☆167 · Jun 22, 2025 · Updated 9 months ago
- The ECNU NLP group studied CS224n through seminars in the summer of 2017. · ☆10 · Aug 12, 2017 · Updated 8 years ago
- ☆16 · Jul 23, 2024 · Updated last year
- Compressing Large Language Models using Low Precision and Low Rank Decomposition · ☆108 · Nov 24, 2025 · Updated 4 months ago
- Easy-to-use Retrieval-Enhanced Transformer implementation · ☆10 · Sep 30, 2022 · Updated 3 years ago
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention for Test-Time Regressi… · ☆23 · Oct 1, 2025 · Updated 6 months ago
- NLPCC-2025 Shared Task 1: LLM-Generated Text Detection · ☆15 · Apr 6, 2026 · Updated last week
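
Several of the repositories above (xKV itself, Palu, SLiM, the Multi-Layer Key-Value sharing experiments) build on the same observation: KV caches are approximately low-rank, and caches from nearby layers are correlated, so stacking them and keeping a truncated SVD stores far fewer numbers than the raw caches. A minimal NumPy sketch of that idea on synthetic data — the shapes, the correlation model, and the rank choice are illustrative assumptions, not xKV's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, seq_len, head_dim = 8, 512, 64

# Simulate correlated per-layer key caches: a shared component plus
# small layer-specific noise (a stand-in for real correlated caches).
shared = rng.standard_normal((seq_len, head_dim))
keys = [shared + 0.1 * rng.standard_normal((seq_len, head_dim))
        for _ in range(n_layers)]

# Stack the layers along the feature axis: (seq_len, n_layers * head_dim).
stacked = np.concatenate(keys, axis=1)

# Truncated SVD: keep only the top-r singular directions.
r = 64
U, S, Vt = np.linalg.svd(stacked, full_matrices=False)
approx = U[:, :r] * S[:r] @ Vt[:r, :]

# Storage cost: (U_r * S_r) is seq_len x r, V_r is r x (n_layers * head_dim),
# versus the full stacked cache.
raw = stacked.size
compressed = seq_len * r + r * stacked.shape[1]
rel_err = np.linalg.norm(stacked - approx) / np.linalg.norm(stacked)
print(f"compression ratio: {raw / compressed:.2f}x, rel. error: {rel_err:.3f}")
```

With these synthetic settings the rank-64 factorization stores 4x fewer values than the raw stacked cache while keeping the relative reconstruction error small, because most of the energy is shared across layers; real caches are less cleanly correlated, which is why the methods above differ in where and how they apply the low-rank decomposition.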