xlite-dev / Awesome-DiT-Inference
A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc.
⭐490 · Updated last month
Alternatives and similar repositories for Awesome-DiT-Inference
Users interested in Awesome-DiT-Inference are comparing it to the libraries listed below.
- A PyTorch-native and Flexible Inference Engine with Hybrid Cache Acceleration and Parallelism for DiTs. ⭐891 · Updated this week
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ⭐897 · Updated 2 weeks ago
- A sparse attention kernel supporting mixed sparse patterns ⭐436 · Updated last week
- Model Compression Toolbox for Large Language Models and Diffusion Models ⭐732 · Updated 5 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ⭐619 · Updated this week
- ⭐189 · Updated last year
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ⭐146 · Updated 9 months ago
- High-performance inference engine for diffusion models ⭐103 · Updated 4 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ⭐626 · Updated this week
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ⭐716 · Updated last year
- Collection of awesome generation acceleration resources. ⭐381 · Updated 6 months ago
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ⭐616 · Updated last month
- A parallel VAE that avoids OOM for high-resolution image generation ⭐84 · Updated 5 months ago
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ⭐410 · Updated 6 months ago
- A unified framework aiming to integrate most existing feature-caching-based diffusion acceleration schemes (see the sketch after this list). ⭐83 · Updated 2 months ago
- ⭐444 · Updated 5 months ago
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse-Linear Attention ⭐243 · Updated 2 weeks ago
- [ICCV2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers ⭐355 · Updated 5 months ago
- Puzzles for learning Triton; play with minimal environment configuration! ⭐590 · Updated 2 weeks ago
- FlashAttention tutorial written in Python, Triton, CUDA, and CUTLASS ⭐475 · Updated 8 months ago
- [ICLR2025] Accelerating Diffusion Transformers with Token-wise Feature Caching ⭐206 · Updated 10 months ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models. ⭐659 · Updated last month
- Accelerating MoE with IO and Tile-aware Optimizations ⭐542 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving… ⭐802 · Updated 10 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ⭐626 · Updated this week
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ⭐263 · Updated 6 months ago
- Distributed Compiler based on Triton for Parallel Systems ⭐1,313 · Updated 3 weeks ago
- [NeurIPS 2025] Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation ⭐573 · Updated 2 months ago
- FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x speedup vs SDPA EA. ⭐245 · Updated last month
- Efficient LLM Inference over Long Sequences ⭐393 · Updated 6 months ago
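
Several entries above (e.g., TaylorSeers and the token-wise feature caching project) revolve around feature caching: reusing intermediate DiT activations across adjacent denoising steps instead of recomputing them, since those activations change slowly between steps. Below is a minimal PyTorch sketch of the idea; the `CachedBlock` wrapper and the fixed `refresh_every` policy are illustrative assumptions, not the API of any repository listed here.

```python
import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    """Hypothetical wrapper: recompute a DiT block every `refresh_every`
    denoising steps and reuse its cached residual in between."""

    def __init__(self, block: nn.Module, refresh_every: int = 3):
        super().__init__()
        self.block = block
        self.refresh_every = refresh_every
        self._cached_delta = None  # last computed residual (output - input)

    def forward(self, x: torch.Tensor, step: int) -> torch.Tensor:
        if step % self.refresh_every == 0 or self._cached_delta is None:
            out = self.block(x)           # full recomputation on refresh steps
            self._cached_delta = out - x  # cache the residual update
            return out
        return x + self._cached_delta    # skipped step: reuse cached residual
```

The listed projects replace this fixed interval with smarter policies, for example error-aware cache refreshes, or, in TaylorSeers, forecasting how the cached features evolve rather than freezing them.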