MingXiangL / Teacache-xDiT
Combining Teacache with xDiT to Accelerate Visual Generation Models
☆28 · Updated 3 months ago
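The listing below centers on cache-based acceleration of diffusion transformers. As a purely illustrative aid, here is a minimal sketch of the general idea behind training-free residual caching in a DiT forward pass (in the spirit of TeaCache): a cheap proxy signal decides whether to reuse the previous step's block output instead of recomputing it. All names here (`CachedDiT`, `rel_l1`, `threshold`) are assumptions for illustration, not this repository's actual API.

```python
# Illustrative sketch of training-free feature caching in a diffusion transformer.
# Hypothetical names throughout; see the individual repositories for real implementations.
import torch


class CachedDiT(torch.nn.Module):
    def __init__(self, blocks: torch.nn.ModuleList, threshold: float = 0.1):
        super().__init__()
        self.blocks = blocks          # the stack of DiT transformer blocks
        self.threshold = threshold    # accumulated-change budget before recomputing
        self._prev_mod_input = None   # modulated input from the last fully computed step
        self._cached_residual = None  # residual (output - input) from that step
        self._accum_change = 0.0

    @staticmethod
    def rel_l1(a: torch.Tensor, b: torch.Tensor) -> float:
        # Relative L1 change between two tensors.
        return ((a - b).abs().mean() / (b.abs().mean() + 1e-8)).item()

    def forward(self, x: torch.Tensor, mod_input: torch.Tensor) -> torch.Tensor:
        # mod_input: a timestep-embedding-modulated input, used as a cheap proxy
        # for how much the expensive block outputs will change at this step.
        if self._prev_mod_input is not None:
            self._accum_change += self.rel_l1(mod_input, self._prev_mod_input)

        if self._cached_residual is not None and self._accum_change < self.threshold:
            # Change is small: reuse the cached residual and skip the blocks.
            return x + self._cached_residual

        # Change is large (or this is the first step): run the blocks and refresh the cache.
        h = x
        for block in self.blocks:
            h = block(h)
        self._cached_residual = h - x
        self._prev_mod_input = mod_input
        self._accum_change = 0.0
        return h
```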
Alternatives and similar repositories for Teacache-xDiT
Users that are interested in Teacache-xDiT are comparing it to the libraries listed below
- https://wavespeed.ai/ · Context parallel attention that accelerates DiT model inference with dynamic caching ☆350 · Updated last month
- [ICML 2025] Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity ☆393 · Updated 2 months ago
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆241 · Updated 7 months ago
- ☆171 · Updated 6 months ago
- [ICLR 2025] Accelerating Diffusion Transformers with Token-wise Feature Caching ☆170 · Updated 4 months ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆202 · Updated 5 months ago
- 🤗 A Training-free and Easy-to-use Cache Acceleration Toolbox for Diffusion Transformers ☆117 · Updated this week
- Adaptive Caching for Faster Video Generation with Diffusion Transformers