ThisisBillhe / ZipCache
[NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification
☆21 · Updated last month
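ZipCache's premise, per the description above, is that most KV cache entries tolerate aggressive quantization as long as the few salient tokens are identified and kept at higher precision. Below is a minimal PyTorch sketch of that general recipe; the saliency signal (mean attention received per token), the bit-widths, and all function names are illustrative assumptions, not the paper's actual method.

```python
import torch

def fake_quantize_per_token(x: torch.Tensor, n_bits: int) -> torch.Tensor:
    """Uniform asymmetric per-token quantize-dequantize (illustrative)."""
    qmax = 2 ** n_bits - 1
    xmin = x.amin(dim=-1, keepdim=True)
    xmax = x.amax(dim=-1, keepdim=True)
    scale = (xmax - xmin).clamp(min=1e-8) / qmax
    q = ((x - xmin) / scale).round().clamp(0, qmax)
    return q * scale + xmin

def salient_token_kv_quant(k_cache, v_cache, attn_probs, salient_ratio=0.1):
    """k_cache, v_cache: [seq_len, head_dim]; attn_probs: [n_queries, seq_len].
    Tokens receiving the most attention (an assumed saliency proxy) stay at
    8 bits; everything else drops to 2 bits."""
    seq_len = k_cache.shape[0]
    n_salient = max(1, int(seq_len * salient_ratio))
    salient_idx = attn_probs.mean(dim=0).topk(n_salient).indices

    k_q = fake_quantize_per_token(k_cache, n_bits=2)
    v_q = fake_quantize_per_token(v_cache, n_bits=2)
    # Overwrite the salient rows with a high-precision version.
    k_q[salient_idx] = fake_quantize_per_token(k_cache[salient_idx], n_bits=8)
    v_q[salient_idx] = fake_quantize_per_token(v_cache[salient_idx], n_bits=8)
    return k_q, v_q
```

A real implementation would store packed integer codes plus per-token scales and zero-points rather than the dequantized floats returned here.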
Alternatives and similar repositories for ZipCache
Users interested in ZipCache are comparing it to the repositories listed below.
- This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality" ☆47 · Updated last month
- The official code implementation of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆39 · Updated last month
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆38 · Updated last month
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆146 · Updated last month
- 📚 A collection of token-reduction resources for model compression. ☆53 · Updated 2 weeks ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆92 · Updated last month
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆159 · Updated 7 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆48 · Updated 8 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆43 · Updated 3 months ago
- Code release for VTW (AAAI 2025, Oral) ☆39 · Updated 3 months ago
- ☆164 · Updated 4 months ago
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆54 · Updated 4 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆93 · Updated 6 months ago
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models. ☆73 · Updated 4 months ago
- SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆19 · Updated 7 months ago
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" (a minimal sketch of this idea follows the list). ☆74 · Updated 5 months ago
- [ICML'25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". ☆102 · Updated last week
- DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆43 · Updated last month
- qwen-nsa ☆60 · Updated last month
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆102 · Updated 9 months ago
- ☆9 · Updated 8 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆89 · Updated 3 months ago
- Code implementation of GPTQv2 (https://arxiv.org/abs/2504.02692) ☆36 · Updated 3 weeks ago
- [CVPR 2024 Highlight] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Mo… ☆63 · Updated 9 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆202 · Updated 3 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆45 · Updated 10 months ago
- Learnable Semi-structured Sparsity for Vision Transformers and Diffusion Transformers ☆11 · Updated 3 months ago
- ☆29 · Updated 3 weeks ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆66 · Updated 6 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆22 · Updated last year
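To make one of the listed techniques concrete: the [CLS]-attention pruning entry above describes a training-free scheme in which visual tokens are ranked by the attention they receive from the vision encoder's [CLS] token, and only the most-attended fraction is passed on to the LLM. The PyTorch sketch below illustrates that general idea; the scoring choice (head-averaged [CLS] attention from the last encoder layer) and every name in it are assumptions for illustration, not the paper's exact implementation.

```python
import torch

def prune_visual_tokens(patch_tokens: torch.Tensor,
                        cls_attn: torch.Tensor,
                        keep_ratio: float = 0.25) -> torch.Tensor:
    """patch_tokens: [num_patches, dim] visual tokens from the vision encoder.
    cls_attn: [num_patches] attention from the [CLS] token to each patch
    (assumed: averaged over heads in the last encoder layer).
    Returns only the most-attended patches, in their original order."""
    num_keep = max(1, int(patch_tokens.shape[0] * keep_ratio))
    # Select the top-k patches by [CLS] attention, then restore spatial order.
    keep_idx = cls_attn.topk(num_keep).indices.sort().values
    return patch_tokens[keep_idx]
```

Restoring the original order after the top-k selection keeps the surviving tokens spatially consistent, so the LLM sees a shorter but coherently ordered visual sequence.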