ThisisBillhe / ZipCache
[NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification
☆19 · Updated 7 months ago
Alternatives and similar repositories for ZipCache:
Users interested in ZipCache are comparing it to the repositories listed below
- This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality" ☆46 · Updated 2 months ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆67 · Updated this week
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆149 · Updated 5 months ago
- ☆9 · Updated 6 months ago
- The official code implementation of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆37 · Updated last month
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆35 · Updated last week
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆78 · Updated last month
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆35 · Updated last month
- Code release for VTW (AAAI 2025 Oral) ☆32 · Updated 2 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆43 · Updated 6 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆102 · Updated this week
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆35 · Updated last year
- SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆17 · Updated 5 months ago
- ☆153 · Updated 2 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for Diffusion Models. ☆22 · Updated 11 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆33 · Updated 9 months ago
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆44 · Updated 2 months ago
- PyTorch implementation of PTQ4DiT https://arxiv.org/abs/2405.16005 ☆26 · Updated 4 months ago
- [ICML 2024 Oral] This project is the official implementation of our "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" ☆63 · Updated 11 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆168 · Updated last month
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference" ☆92 · Updated 4 months ago
- [CVPR 2024 Highlight] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models" ☆61 · Updated 7 months ago
- [NeurIPS'24] An efficient and accurate memory-saving method for W4A4 large multi-modal models. ☆67 · Updated 2 months ago
- This repo contains the source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆34 · Updated 7 months ago
- Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" ☆80 · Updated 2 weeks ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆121 · Updated last month
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆52 · Updated last month
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆164 · Updated last month
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆84 · Updated last week
- DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆35 · Updated this week