xKV: Cross-Layer SVD for KV-Cache Compression
☆49 · Updated Nov 30, 2025
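The technique named in the title can be illustrated with a toy sketch. This is not code from the xKV repository, only a minimal NumPy illustration of the general idea of cross-layer low-rank compression: if the key caches of several adjacent layers share a common subspace, concatenating them and taking one truncated SVD stores far fewer numbers than keeping each layer's cache separately. All shapes and names below are hypothetical.

```python
import numpy as np

# Toy setup (hypothetical sizes): 4 layers whose key caches share a
# rank-16 structure by construction.
rng = np.random.default_rng(0)
layers, tokens, head_dim, rank = 4, 128, 64, 16

shared = rng.standard_normal((tokens, rank))
k_caches = [shared @ rng.standard_normal((rank, head_dim)) for _ in range(layers)]

# Concatenate along the feature axis and factor the group once.
stacked = np.concatenate(k_caches, axis=1)           # (tokens, layers*head_dim)
U, S, Vt = np.linalg.svd(stacked, full_matrices=False)
U_r, S_r, Vt_r = U[:, :rank], S[:rank], Vt[:rank]    # keep top-r components

approx = U_r @ np.diag(S_r) @ Vt_r
rel_err = np.linalg.norm(stacked - approx) / np.linalg.norm(stacked)

# Storage: one (tokens, r) factor plus an (r, layers*head_dim) basis,
# versus `layers` separate (tokens, head_dim) caches.
compressed = U_r.size + S_r.size + Vt_r.size
print(f"relative error {rel_err:.2e}, size ratio {compressed / stacked.size:.2f}")
```

Because the toy caches are exactly low rank, the reconstruction error is near machine precision while the joint factorization stores roughly a fifth of the original numbers; real KV caches are only approximately low rank, so the actual method trades rank against accuracy.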
Alternatives and similar repositories for xKV
Users who are interested in xKV are comparing it to the repositories listed below.
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection · ☆155 · Updated Feb 20, 2025
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆50 · Updated Oct 18, 2024
- ☆27 · Updated Nov 25, 2025
- ☆47 · Updated Nov 25, 2024
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models · ☆24 · Updated Oct 5, 2024
- ☆21 · Updated Jun 1, 2025
- The official repository of Quamba1 [ICLR 2025] & Quamba2 [ICML 2025] · ☆67 · Updated Jun 19, 2025
- Algorithms for approximate attention in LLMs · ☆22 · Updated Apr 14, 2025
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" · ☆22 · Updated Apr 22, 2025
- Code and resources for the NeurIPS 2025 paper "BMMR: A Large-Scale Bilingual Multimodal Multi-Discipline Reasoning Dataset" by Zhiheng X… · ☆19 · Updated Oct 14, 2025
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) · ☆36 · Updated Nov 28, 2025
- The official implementation of the DAC 2024 paper GQA-LUT · ☆22 · Updated Dec 20, 2024
- ☆22 · Updated Apr 2, 2023
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) · ☆22 · Updated May 24, 2023
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆390 · Updated Nov 20, 2025
- Analyze the inference of large language models (LLMs), covering aspects like computation, storage, transmission, and hardware roofline mod… · ☆641 · Updated Sep 11, 2024
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference · ☆46 · Updated Jun 11, 2025
- [ICML'25] "Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding" by Jiajun Zhu, Peihao Wang, Ruisi… · ☆15 · Updated Jun 6, 2025
- Universal data I/O and neural-network modules for NLP tasks · ☆18 · Updated Apr 13, 2026
- ☆46 · Updated Sep 27, 2025
- Codebase for the ICML'24 paper "Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs" · ☆27 · Updated Jun 25, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆296 · Updated May 1, 2025
- ☆20 · Updated Oct 13, 2024
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation · ☆10 · Updated May 19, 2025
- ☆13 · Updated Nov 29, 2024
- ☆17 · Updated Feb 3, 2023
- ☆12 · Updated Jul 4, 2020
- AnyDSL traversal code · ☆15 · Updated Feb 18, 2019
- ☆15 · Updated Apr 11, 2024
- Multi-Layer Key-Value sharing experiments on Pythia models · ☆34 · Updated Jun 14, 2024
- Optimize the order of execution for tf.einsum · ☆14 · Updated May 31, 2017
- Command helper for the Slurm system; acts as if you are on a compute node · ☆16 · Updated Feb 1, 2025
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ☆420 · Updated Aug 13, 2024
- ☆27 · Updated this week
- A model-serving framework for various research and production scenarios, built seamlessly on the PyTorch and HuggingFace ecosystem · ☆23 · Updated Oct 11, 2024
- ☆16 · Updated Jul 23, 2024
- Template for Makefile-based SysY compiler projects · ☆12 · Updated Jun 16, 2022
- I love Database. · ☆18 · Updated Dec 25, 2024
- Official repository for Flash Local Linear Attention · ☆23 · Updated Apr 23, 2026