xlite-dev / Awesome-DiT-Inference
📚 A curated list of Awesome Diffusion Inference Papers with Code: Sampling, Caching, Quantization, Parallelism, etc. 🎉
☆319 · Updated last week
Alternatives and similar repositories for Awesome-DiT-Inference
Users interested in Awesome-DiT-Inference are comparing it to the repositories listed below.
- SpargeAttention: A training-free sparse attention method that can accelerate inference for any model. ☆645 · Updated 3 weeks ago
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆530 · Updated 3 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆249 · Updated 5 months ago
- ☆170 · Updated 6 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆413 · Updated last week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆528 · Updated last month
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆103 · Updated 3 months ago
- A parallel VAE that avoids OOM for high-resolution image generation ☆67 · Updated 5 months ago
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ☆695 · Updated 7 months ago
- [ICML2025] Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity ☆364 · Updated last month
- Flash attention tutorials written in Python, Triton, CUDA, and CUTLASS ☆380 · Updated 2 months ago
- 📚 Collection of awesome generation acceleration resources. ☆282 · Updated last week
- Puzzles for learning Triton; play them with minimal environment configuration! ☆401 · Updated 7 months ago
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models. ☆350 · Updated last year
- 🤗 CacheDiT: A Training-free and Easy-to-use Cache Acceleration Toolbox for Diffusion Transformers 🔥 ☆99 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆717 · Updated 4 months ago
- Efficient LLM Inference over Long Sequences ☆382 · Updated 3 weeks ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training ☆215 · Updated last month
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆191 · Updated last week
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ☆329 · Updated last week
- ⚡️ FFPA: Extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large headdim, 1.8x~3x↑ vs SDPA. 🎉 ☆189 · Updated 2 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆473 · Updated 5 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆272 · Updated last year
- kernels, of the mega variety ☆441 · Updated last month
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. ☆404 · Updated 7 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆133 · Updated 3 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆303 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆880 · Updated this week
- To pioneer training long-context multi-modal transformer models ☆42 · Updated last month
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆510 · Updated last week