ThisisBillhe / ZipCache
[NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification
☆23 · Updated 4 months ago
Alternatives and similar repositories for ZipCache
Users interested in ZipCache are comparing it to the repositories listed below.
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache) ☆132 · Updated this week
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆48 · Updated 5 months ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆50 · Updated 4 months ago
- [ICML 2025] This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality" ☆51 · Updated 4 months ago
- ☆89 · Updated 2 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆54 · Updated 11 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆213 · Updated last month
- ☆171 · Updated 6 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆320 · Updated this week
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs ☆165 · Updated 10 months ago
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆49 · Updated last month
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models" ☆103 · Updated 3 weeks ago
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆63 · Updated 4 months ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆109 · Updated 4 months ago
- [ICCV'25] The official code implementation of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆51 · Updated this week
- A sparse attention kernel supporting mixed sparse patterns ☆262 · Updated 5 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference" ☆98 · Updated 8 months ago
- [NeurIPS'24] An efficient and accurate memory-saving method towards W4A4 large multi-modal models ☆79 · Updated 7 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆121 · Updated 3 weeks ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆110 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆221 · Updated last month
- [ICML 2025 Oral] Mixture of Lookup Experts ☆45 · Updated 2 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆39 · Updated last year
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆145 · Updated 3 weeks ago
- ☆14 · Updated 4 months ago
- ☆10 · Updated 11 months ago
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆43 · Updated last month
- [ICML 2024 Oral] This project is the official implementation of our paper "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" ☆67 · Updated last year
- 📚 Collection of token-level model compression resources ☆147 · Updated last month
- ScaleKV: Memory-Efficient Visual Autoregressive Modeling with Scale-Aware KV Cache Compression ☆46 · Updated 2 months ago