Shenyi-Z / ToCaLinks
Accelerating Diffusion Transformers with Token-wise Feature Caching
★159 · Updated 3 months ago
Alternatives and similar repositories for ToCa
Users interested in ToCa are comparing it to the repositories listed below.
- From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers ★179 · Updated last month
- Collection of awesome generation acceleration resources. ★270 · Updated 2 months ago
- (ToCa-v2) A new version of ToCa, with faster speed and better acceleration! ★37 · Updated 3 months ago
- ★167 · Updated 5 months ago
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ★150 · Updated 7 months ago
- [ICML 2025] Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity ★344 · Updated 2 weeks ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ★102 · Updated 3 months ago
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var… ★130 · Updated 4 months ago
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ★231 · Updated 5 months ago
- Official PyTorch implementation of the paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up". ★206 · Updated 2 months ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ★105 · Updated 11 months ago
- X2I: Seamless Integration of Multimodal Understanding into Diffusion Transformer via Attention Distillation ★73 · Updated 2 months ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ★202 · Updated 4 months ago
- ★310 · Updated last week
- Official Implementation: Training-Free Efficient Video Generation via Dynamic Token Carving ★203 · Updated last week
- ★78 · Updated 2 months ago
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… ★65 · Updated last week
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ★47 · Updated 11 months ago
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow ★123 · Updated 2 months ago
- [ICML 2025] This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality… ★49 · Updated 2 months ago
- The code of our work "Golden Noise for Diffusion Models: A Learning Framework". ★156 · Updated 4 months ago
- [CVPR 2025 Highlight] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project ★164 · Updated 3 months ago
- STAR: Scale-wise Text-to-image generation via Auto-Regressive representations ★142 · Updated 4 months ago
- Official PyTorch and Diffusers Implementation of "LinFusion: 1 GPU, 1 Minute, 16K Image" ★302 · Updated 6 months ago
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ★101 · Updated 2 months ago
- Combining TeaCache with xDiT to Accelerate Visual Generation Models ★25 · Updated 2 months ago
- [CVPR 2025] The official implementation of "CacheQuant: Comprehensively Accelerated Diffusion Models" ★24 · Updated last month
- An Efficient Text-to-Image Generation Pretrain Pipeline ★109 · Updated 2 months ago
- https://wavespeed.ai/ Context-parallel attention that accelerates DiT model inference with dynamic caching ★300 · Updated last month
- [Few-Step Student Surpasses Teacher Diffusion] Learning Few-Step Diffusion Models by Trajectory Distribution Matching ★41 · Updated 3 months ago