BienLuky / CacheQuant
[CVPR 2025] The official implementation of "CacheQuant: Comprehensively Accelerated Diffusion Models"
☆20 · Updated last month
Alternatives and similar repositories for CacheQuant
Users interested in CacheQuant are comparing it to the libraries listed below
- This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality" ☆47 · Updated last month
- ☆78 · Updated last month
- ☆15 · Updated 2 months ago
- This is the official PyTorch implementation for the paper "Towards Accurate Post-training Quantization for Diffusion Models" (CVPR 2024 Poster) ☆35 · Updated 11 months ago
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆100 · Updated last month
- [NeurIPS 2024] The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆46 · Updated 4 months ago
- Official repository of InLine attention (NeurIPS 2024) ☆46 · Updated 4 months ago
- Curated list of methods that focus on improving the efficiency of diffusion models ☆44 · Updated 10 months ago
- [ICML'25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" ☆103 · Updated 2 weeks ago
- [CVPR 2024 Highlight] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models" ☆63 · Updated 9 months ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆102 · Updated 10 months ago
- [AAAI-2025] The official code for SiTo (Similarity-based Token Pruning for Stable Diffusion Models) ☆27 · Updated 3 months ago
- (ToCa-v2) A new version of ToCa, with faster speed and better acceleration! ☆33 · Updated 2 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆41 · Updated last month
- Generate one 2K image on a single 3090 GPU! ☆31 · Updated last month
- Accelerating Diffusion Transformers with Token-wise Feature Caching ☆137 · Updated 2 months ago
- This repository provides an improved LLamaGen model, fine-tuned on 500,000 high-quality images, each accompanied by over 300 token prompt… ☆30 · Updated 6 months ago
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models. ☆73 · Updated 4 months ago
- ☆165 · Updated 4 months ago
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆74 · Updated 5 months ago
- [ECCV 2024] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation ☆34 · Updated 8 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆45 · Updated 10 months ago
- (ICLR 2025) BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models ☆21 · Updated 7 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆48 · Updated 8 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆43 · Updated 3 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆28 · Updated 11 months ago
- ImageGen-CoT: Enhancing Text-to-Image In-context Learning with Chain-of-Thought Reasoning ☆32 · Updated last month
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ☆147 · Updated 6 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆31 · Updated 3 months ago
- The official implementation of "EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models" ☆12 · Updated last year