xlite-dev / Awesome-DiT-Inference
📚 A curated list of Awesome Diffusion Inference Papers with code: Sampling, Caching, Multi-GPUs, etc. 🎉🎉
☆256 · Updated this week
Alternatives and similar repositories for Awesome-DiT-Inference
Users interested in Awesome-DiT-Inference are comparing it to the libraries listed below.
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆489 · Updated 2 months ago
- ☆167 · Updated 4 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆376 · Updated this week
- SpargeAttention: A training-free sparse attention that can accelerate any model inference. ☆583 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆510 · Updated last week
- 📚 Collection of awesome generation acceleration resources. ☆257 · Updated last month
- A sparse attention kernel supporting mixed sparse patterns ☆222 · Updated 3 months ago
- [ICML 2025] Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity ☆310 · Updated this week
- [ICLR 2025] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆98 · Updated 2 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆370 · Updated 3 weeks ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆203 · Updated 2 weeks ago
- A parallelized VAE that avoids OOM for high-resolution image generation ☆64 · Updated 4 months ago
- Puzzles for learning Triton; play with them using minimal environment configuration! ☆334 · Updated 6 months ago
- https://wavespeed.ai/ Context parallel attention that accelerates DiT model inference with dynamic caching ☆290 · Updated 3 weeks ago
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models. ☆347 · Updated last year
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆160 · Updated this week
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ☆684 · Updated 6 months ago
- Distributed Triton for Parallel Systems ☆775 · Updated last week
- Ring attention implementation with flash attention (a minimal single-process sketch of the ring-attention idea appears after this list) ☆774 · Updated 2 weeks ago
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn-3 💨 ColumnSparseGEMM 2.5× … ☆65 · Updated this week
- 📚 FFPA (Split-D): Extend FlashAttention with Split-D for large headdim, O(1) GPU SRAM complexity, 1.8x~3x↑🎉 faster than SDPA EA. ☆184 · Updated 3 weeks ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆271 · Updated last year
- Efficient LLM Inference over Long Sequences ☆376 · Updated last week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆291 · Updated 6 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆384 · Updated last week
- [MLSys 2025] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys 2025] LServe: Efficient Long-sequence LLM Se… ☆690 · Updated 3 months ago
- A list of papers, docs, and code about efficient AIGC. This repo aims to provide information for efficient AIGC research, including languag… ☆182 · Updated 3 months ago
- Accelerating Diffusion Transformers with Token-wise Feature Caching (a toy sketch of the caching idea appears after this list) ☆152 · Updated 2 months ago
- [NeurIPS 2023] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆448 · Updated 10 months ago
- ☆70 · Updated 5 months ago
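
Several entries above (ring attention, USP, the flash-attention tutorial) rest on the same online-softmax trick: exact attention can be accumulated block by block if a running max and normalizer are carried along. Below is a minimal single-process NumPy sketch of the ring-attention idea; all names and shapes are illustrative, and a real implementation runs one rank per GPU and overlaps the ring exchange of K/V shards with compute.

```python
# Minimal single-process sketch of ring attention (illustrative only;
# real systems place one rank per GPU and pass K/V blocks ring-wise).
import numpy as np

def ring_attention(q, k, v, num_devices):
    """Exact attention computed shard by shard, as if each of `num_devices`
    ranks held one query shard and received K/V shards from its neighbour."""
    seq, dim = q.shape
    shard = seq // num_devices
    q_shards = [q[i * shard:(i + 1) * shard] for i in range(num_devices)]
    kv_shards = [(k[i * shard:(i + 1) * shard], v[i * shard:(i + 1) * shard])
                 for i in range(num_devices)]
    out = np.zeros_like(q)
    for rank in range(num_devices):
        qi = q_shards[rank]
        m = np.full((shard, 1), -np.inf)  # running row-wise max
        l = np.zeros((shard, 1))          # running softmax normalizer
        acc = np.zeros((shard, dim))      # running weighted sum of V
        for step in range(num_devices):
            # On real hardware this block arrives from the ring neighbour;
            # here we simply index the shard that rank would receive.
            kj, vj = kv_shards[(rank + step) % num_devices]
            s = qi @ kj.T / np.sqrt(dim)
            m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
            correction = np.exp(m - m_new)  # rescale stats already accumulated
            p = np.exp(s - m_new)
            l = l * correction + p.sum(axis=-1, keepdims=True)
            acc = acc * correction + p @ vj
            m = m_new
        out[rank * shard:(rank + 1) * shard] = acc / l
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k, v = [rng.standard_normal((128, 64)) for _ in range(3)]
    s = q @ k.T / np.sqrt(64)
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    ref = (p / p.sum(axis=-1, keepdims=True)) @ v
    assert np.allclose(ring_attention(q, k, v, 4), ref, atol=1e-6)
    print("ring attention matches full attention")
```

Because the accumulated statistics are rescaled whenever a new block raises the running max, the sharded result is numerically identical to full attention, which the final assert verifies.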
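
The token-wise feature-caching entry exploits the observation that DiT activations change slowly across adjacent denoising steps, so a block's output can sometimes be reused instead of recomputed. A toy sketch of that general idea follows; the relative-drift threshold and the whole-tensor reuse test are hypothetical simplifications (the actual method decides reuse per token), and `np.tanh(x @ w)` merely stands in for an expensive DiT block.

```python
# Toy sketch of feature caching across diffusion timesteps (illustrative;
# the drift threshold and whole-tensor test are hypothetical simplifications).
import numpy as np

class CachedBlock:
    """Wraps an expensive block; reuses the cached output while consecutive
    inputs stay close to the input it was computed from."""

    def __init__(self, weight, rel_tol=0.05):
        self.weight = weight
        self.rel_tol = rel_tol      # hypothetical reuse threshold
        self.cached_in = None
        self.cached_out = None
        self.recomputes = 0

    def __call__(self, x):
        if self.cached_in is not None:
            drift = (np.linalg.norm(x - self.cached_in)
                     / np.linalg.norm(self.cached_in))
            if drift < self.rel_tol:
                return self.cached_out              # cheap path: reuse cache
        self.recomputes += 1
        self.cached_in = x
        self.cached_out = np.tanh(x @ self.weight)  # stand-in for a DiT block
        return self.cached_out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = CachedBlock(rng.standard_normal((64, 64)) * 0.1)
    x = rng.standard_normal((16, 64))
    for t in range(50):                              # 50 denoising steps
        x = x + 0.01 * rng.standard_normal(x.shape)  # slow per-step drift
        _ = block(x)
    print(f"recomputed {block.recomputes} of 50 steps")
```

Production caching schemes additionally bound how long a feature may be reused (or measure similarity per token, per the repo's title) so that approximation error cannot accumulate across many denoising steps.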