BienLuky / CacheQuant
[CVPR 2025] The official implementation of "CacheQuant: Comprehensively Accelerated Diffusion Models"
☆ 42 · Updated 2 months ago
Alternatives and similar repositories for CacheQuant
Users interested in CacheQuant are comparing it to the repositories listed below.
- [ICLR 2025] Accelerating Diffusion Transformers with Token-wise Feature Caching — ☆ 206 · Updated 10 months ago
- Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation — ☆ 81 · Updated 6 months ago
- [CVPR 2024 Highlight & TPAMI 2025] The official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… — ☆ 108 · Updated 3 months ago
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient — ☆ 108 · Updated 4 months ago
- [ICML 2025] The official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality… — ☆ 53 · Updated 10 months ago
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow — ☆ 156 · Updated last month
- ☆ 191 · Updated last year
- ☆ 92 · Updated 10 months ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching — ☆ 116 · Updated last year
- Code for the ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" — ☆ 164 · Updated last year
- (ToCa-v2) A new version of ToCa, with faster speed and better acceleration — ☆ 39 · Updated 10 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers — ☆ 74 · Updated last year
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. — ☆ 52 · Updated last year
- [ICLR 2025] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation — ☆ 146 · Updated 10 months ago
- Autoregressive Image Generation with Randomized Parallel Decoding — ☆ 84 · Updated 3 months ago
- [ICCV 2025] Generate one 2K image on a single 24 GB 3090 GPU! — ☆ 83 · Updated 4 months ago
- 📚 Collection of awesome generation-acceleration resources — ☆ 383 · Updated 6 months ago
- An improved LlamaGen model, fine-tuned on 500,000 high-quality images, each accompanied by a prompt of over 300 tokens… — ☆ 30 · Updated last year
- [NeurIPS 2025 Oral] Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think — ☆ 240 · Updated 3 months ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model — ☆ 114 · Updated 6 months ago
- [NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks — ☆ 134 · Updated last year
- Code for Draft Attention — ☆ 99 · Updated 8 months ago
- Towards Scalable Pre-training of Visual Tokenizers for Generation — ☆ 428 · Updated last month
- [ICCV'25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" — ☆ 67 · Updated 2 weeks ago
- The official implementation of "Sparse-vDiT: Unleashing the Power of Sparse Attention to Accelerate Video Diffusion Transformers" (arXiv … — ☆ 50 · Updated 7 months ago
- [ICCV 2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers — ☆ 360 · Updated 5 months ago
- ☆ 80 · Updated 3 months ago
- [CVPR 2025] DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention — ☆ 177 · Updated 10 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training — ☆ 100 · Updated 6 months ago
- [NeurIPS'24] An efficient and accurate memory-saving method towards W4A4 large multi-modal models — ☆ 93 · Updated last year