Shenyi-Z / TaylorSeer
[ICCV 2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers
☆291 · Updated last month
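TaylorSeer's titular idea is to replace naive cache reuse with forecasting: treat the features a DiT block produced at recent cached timesteps as samples of a smooth trajectory, estimate derivatives with finite differences, and extrapolate the next step's features via a truncated Taylor expansion instead of recomputing the block. Below is a minimal sketch of that idea, assuming PyTorch and a uniform caching interval; the `taylor_forecast` helper and its backward-difference scheme are illustrative assumptions, not the repository's actual API.

```python
import torch

def taylor_forecast(history: list[torch.Tensor], dt: float = 1.0, order: int = 2) -> torch.Tensor:
    """Extrapolate the next feature map from cached ones via a truncated
    Taylor expansion, using backward finite differences as derivative
    estimates. `history` holds features at consecutive cached timesteps,
    most recent last, and needs at least `order + 1` entries; `dt` is
    measured in caching intervals, so dt=1.0 predicts the very next step.
    """
    assert len(history) >= order + 1
    # diffs[k] is the k-th backward difference at the latest timestep,
    # standing in for the k-th derivative (the step size cancels because
    # dt is expressed in units of the caching interval).
    diffs = [history[-1]]
    level = list(history)
    for _ in range(order):
        level = [b - a for a, b in zip(level[:-1], level[1:])]
        diffs.append(level[-1])
    # Taylor step: f(t + dt) ≈ Σ_k diffs[k] * dt^k / k!
    pred = torch.zeros_like(history[-1])
    factorial = 1.0
    for k, d in enumerate(diffs):
        if k > 0:
            factorial *= k
        pred = pred + d * (dt ** k) / factorial
    return pred
```

In a denoising loop this would sit behind a schedule: run the block fully every N steps to refresh `history`, and serve the skipped steps from `taylor_forecast`. At `order=0` it degrades gracefully to plain cache reuse, which is the baseline the paper title contrasts against.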
Alternatives and similar repositories for TaylorSeer
Users interested in TaylorSeer are comparing it to the libraries listed below.
- [ICLR 2025] Accelerating Diffusion Transformers with Token-wise Feature Caching ☆180 · Updated 6 months ago
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆481 · Updated last week
- [NeurIPS 2025] Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation ☆517 · Updated 2 weeks ago
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var… ☆143 · Updated 3 months ago
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆249 · Updated 9 months ago
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ☆159 · Updated 11 months ago
- 📚 Collection of awesome generation acceleration resources. ☆340 · Updated 2 months ago
- ☆178 · Updated 8 months ago
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ☆381 · Updated 3 months ago
- (ToCa-v2) A new version of ToCa, with faster speed and better acceleration! ☆38 · Updated 6 months ago
- DC-Gen: Post-Training Diffusion Acceleration with Deeply Compressed Latent Space ☆183 · Updated this week
- [NeurIPS 2025] Training-Free Efficient Video Generation via Dynamic Token Carving ☆246 · Updated 2 months ago
- [NeurIPS 2025] Official PyTorch implementation of the paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up" ☆210 · Updated last week
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆205 · Updated last week
- A Unified Cache Acceleration Framework for 🤗 Diffusers: Qwen-Image-Lightning, Qwen-Image, HunyuanImage, FLUX, Wan, etc. ☆371 · Updated this week
- The official code for "MagCache: Fast Video Generation with Magnitude-Aware Cache" ☆205 · Updated last week
- Aims to integrate most existing feature-caching-based diffusion acceleration schemes into a unified framework. ☆55 · Updated this week
- [ICCV 2025] [Few-Step Student Surpasses Teacher Diffusion] Learning Few-Step Diffusion Models by Trajectory Distribution Matching ☆52 · Updated last month
- Official code for the ICCV 2025 paper "X2I: Seamless Integration of Multimodal Understanding into Diffusion Transformer via Attention Distillation" ☆84 · Updated 3 months ago
- [ICLR 2025] Official implementation of Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis ☆329 · Updated last week
- SpargeAttention: A training-free sparse attention that can accelerate the inference of any model. ☆729 · Updated last week
- Unofficial extension of Self-Forcing that supports I2V and 14B training. ☆189 · Updated this week
- Official PyTorch and Diffusers implementation of "LinFusion: 1 GPU, 1 Minute, 16K Image" ☆307 · Updated 9 months ago
- [ICCV 2025] The code for our work "Golden Noise for Diffusion Models: A Learning Framework" ☆185 · Updated last month
- ☆546 · Updated last week
- UniWorld: High-Resolution Semantic Encoders for Unified Visual Understanding and Generation ☆708 · Updated 2 months ago
- Light Video Generation Inference Framework ☆614 · Updated this week
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆49 · Updated last year
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆44 · Updated this week
- [ICLR 2025] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆123 · Updated 6 months ago