cat538 / SKVQ
[COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models
☆25 · Oct 5, 2024 · Updated last year
Alternatives and similar repositories for SKVQ
Users interested in SKVQ are comparing it to the libraries listed below.
- ☆40 · Mar 28, 2024 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆356 · Nov 20, 2025 · Updated 2 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM · ☆176 · Jul 12, 2024 · Updated last year
- Residual vector quantization for KV cache compression in large language models · ☆11 · Oct 22, 2024 · Updated last year
- Binary Neural Network-based COVID-19 Face-Mask Wear and Positioning Predictor on Edge Devices · ☆12 · Jul 1, 2021 · Updated 4 years ago
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification · ☆32 · Mar 30, 2025 · Updated 10 months ago
- An Attention Superoptimizer · ☆22 · Jan 20, 2025 · Updated last year
- An implementation of parameter server framework in PyTorch RPC. · ☆12 · Nov 12, 2021 · Updated 4 years ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ☆404 · Aug 13, 2024 · Updated last year
- ☆15 · Sep 24, 2023 · Updated 2 years ago
- ☆38 · Aug 7, 2025 · Updated 6 months ago
- ☆20 · Nov 12, 2025 · Updated 3 months ago
- ☆22 · Mar 7, 2025 · Updated 11 months ago
- Quantized Attention on GPU · ☆44 · Nov 22, 2024 · Updated last year
- ☆49 · Nov 25, 2024 · Updated last year
- ☆20 · Sep 28, 2024 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · Jul 2, 2024 · Updated last year
- QAQ: Quality Adaptive Quantization for LLM KV Cache · ☆55 · Mar 27, 2024 · Updated last year
- ☆30 · Oct 4, 2025 · Updated 4 months ago
- ☆20 · Jul 7, 2017 · Updated 8 years ago
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with low-bit KV cache. · ☆80 · Dec 18, 2025 · Updated last month
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" · ☆29 · Jun 30, 2025 · Updated 7 months ago
- ☆129 · Jun 6, 2025 · Updated 8 months ago
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" · ☆372 · Feb 14, 2025 · Updated 11 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… · ☆68 · Mar 7, 2024 · Updated last year
- Pytorch code for paper QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models · ☆25 · Sep 27, 2023 · Updated 2 years ago
- Keyformer proposes KV Cache reduction through key token identification, without the need for fine-tuning · ☆58 · Mar 26, 2024 · Updated last year
- LLM Inference with Microscaling Format · ☆34 · Nov 12, 2024 · Updated last year
- Matrix multiplication on GPUs for matrices stored on a CPU. Similar to cublasXt, but ported to both NVIDIA and AMD GPUs. · ☆32 · Apr 2, 2025 · Updated 10 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… · ☆67 · Apr 15, 2024 · Updated last year
- ☆303 · Jul 10, 2025 · Updated 7 months ago
- QJL: 1-Bit Quantized JL transform for KV Cache Quantization with Zero Overhead · ☆31 · Jan 27, 2025 · Updated last year
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" · ☆36 · Jul 11, 2024 · Updated last year
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference · ☆82 · Dec 7, 2025 · Updated 2 months ago
- xKV: Cross-Layer SVD for KV-Cache Compression · ☆44 · Nov 30, 2025 · Updated 2 months ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation · ☆34 · May 28, 2025 · Updated 8 months ago
- ☆85 · Jan 23, 2025 · Updated last year
- EMNLP'2023: Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration · ☆36 · Mar 10, 2024 · Updated last year
- Simplification of pruned models for accelerated inference | SoftwareX https://doi.org/10.1016/j.softx.2021.100907 · ☆36 · Feb 25, 2025 · Updated 11 months ago
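Many of the repositories above (SKVQ, KIVI, KVQuant, ZipCache, QAQ, and others) revolve around the same core operation: storing the key and value cache at low bit width. The sketch below is a minimal, generic illustration of asymmetric per-group quantization of a KV-cache tensor; the function names, group size, and 2-bit setting are illustrative assumptions and do not correspond to the API of any listed project.

```python
# Minimal sketch of asymmetric per-group low-bit quantization of a KV-cache
# tensor. Illustrative only: group_size, bits, and the helper names are
# assumptions, not SKVQ's or KIVI's actual implementation.
import numpy as np

def quantize_groups(x: np.ndarray, bits: int = 2, group_size: int = 64):
    """Quantize the last dimension of x in contiguous groups.

    Returns uint8 codes plus the per-group scale and minimum needed to dequantize.
    """
    levels = 2 ** bits - 1
    grouped = x.reshape(*x.shape[:-1], -1, group_size)   # [..., n_groups, group_size]
    mn = grouped.min(axis=-1, keepdims=True)              # per-group minimum (zero point)
    mx = grouped.max(axis=-1, keepdims=True)              # per-group maximum
    scale = (mx - mn) / levels
    scale = np.where(scale == 0, 1e-8, scale)             # avoid division by zero
    codes = np.clip(np.round((grouped - mn) / scale), 0, levels).astype(np.uint8)
    return codes, scale, mn

def dequantize_groups(codes, scale, mn, orig_shape):
    """Reconstruct an approximate float tensor from codes, scale, and minimum."""
    return (codes.astype(np.float32) * scale + mn).reshape(orig_shape)

# Example: a synthetic key cache of shape [batch, heads, seq_len, head_dim].
k = np.random.randn(1, 8, 128, 64).astype(np.float32)
codes, scale, zero = quantize_groups(k, bits=2, group_size=64)
k_hat = dequantize_groups(codes, scale, zero, k.shape)
print("mean abs reconstruction error:", np.abs(k - k_hat).mean())
```

The listed projects differ mainly in how they refine this idea, e.g. which tokens or channels keep higher precision, whether a recent sliding window stays in full precision, and how the packed codes are laid out for fast GPU decoding.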