horseee / learning-to-cache
[NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching
☆116 · Updated last year
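The core idea behind this repository, layer caching, is to reuse a transformer block's output from an earlier denoising step instead of recomputing it at every step. Below is a minimal PyTorch sketch of that general idea, assuming a toy `CachedDiTBlock` and a hand-written `cache_mask`; it is an illustration only, not the official Learning-to-Cache implementation, which learns which layers to cache rather than using a fixed mask.

```python
# Minimal sketch (not the official Learning-to-Cache code) of layer caching in a
# diffusion transformer: at selected denoising steps, a block's output from the
# previous step is reused instead of recomputed. All names are illustrative.

import torch
import torch.nn as nn

class CachedDiTBlock(nn.Module):
    """A transformer block that can reuse its previous-step output."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.cached_output = None  # output saved from the previous denoising step

    def forward(self, x: torch.Tensor, use_cache: bool) -> torch.Tensor:
        if use_cache and self.cached_output is not None:
            return self.cached_output  # skip computation, reuse last step's result
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        out = x + self.mlp(self.norm2(x))
        self.cached_output = out
        return out


# Toy usage: a per-(step, layer) boolean mask decides which layers are recomputed.
# In Learning-to-Cache this decision is learned; here it is a fixed example.
blocks = nn.ModuleList([CachedDiTBlock(dim=64) for _ in range(4)])
x = torch.randn(2, 16, 64)  # (batch, tokens, dim)
cache_mask = torch.tensor([[False] * 4, [False, True, True, False]])  # 2 steps x 4 layers

for step in range(cache_mask.shape[0]):
    h = x
    for layer_idx, block in enumerate(blocks):
        h = block(h, use_cache=bool(cache_mask[step, layer_idx]))
```

The speed-up comes from the skipped attention and MLP computations on the cached layers; the quality trade-off depends on which layers and steps are chosen, which is what the learned router in the paper decides.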
Alternatives and similar repositories for learning-to-cache
Users interested in learning-to-cache are comparing it to the libraries listed below.
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆49 · Updated last year
- ☆182 · Updated 9 months ago
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ☆159 · Updated 11 months ago
- [ICML 2025] This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality" ☆53 · Updated 7 months ago
- Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation ☆72 · Updated 3 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆66 · Updated last year
- Official implementation of the paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models" ☆46 · Updated 3 months ago
- SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training ☆185 · Updated 8 months ago
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆107 · Updated 3 weeks ago
- Curated list of methods that focus on improving the efficiency of diffusion models ☆44 · Updated last year
- Code for Draft Attention ☆91 · Updated 5 months ago
- ☆87 · Updated 6 months ago
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow