thu-nics / DiTFastAttn
☆190 · Updated Jan 14, 2025
Alternatives and similar repositories for DiTFastAttn
Users interested in DiTFastAttn are comparing it to the repositories listed below.
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. (☆52, updated Jul 8, 2024)
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation (☆149, updated Mar 21, 2025)
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching (☆117, updated Jul 15, 2024)
- An auxiliary project analyzing the characteristics of KV in DiT attention. (☆32, updated Nov 29, 2024)
- [WIP] Better (FP8) attention for Hopper (☆32, updated Feb 24, 2025)
- [ICLR2025] Accelerating Diffusion Transformers with Token-wise Feature Caching (☆210, updated Mar 14, 2025)
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers (☆74, updated Sep 3, 2024)
- Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" (☆166, updated Nov 5, 2024)
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention (☆627, updated Feb 3, 2026)
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism (☆2,539, updated this week)
- A parallel VAE that avoids OOM in high-resolution image generation (☆85, updated Aug 4, 2025)
- Model Compression Toolbox for Large Language Models and Diffusion Models (☆753, updated Aug 14, 2025)
- T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free! (☆415, updated Feb 26, 2025)
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models (☆55, updated Jun 26, 2025)
- [NeurIPS 2025] Official PyTorch implementation of the paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up". (☆214, updated Sep 27, 2025)
- Quantized Attention on GPU (☆44, updated Nov 22, 2024)
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT) based GenAI models with error-guided caching. (☆47, updated Jul 17, 2025)
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model's inference. (☆939, updated Dec 31, 2025)
- [ICML 2025] This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality… (☆53, updated Mar 25, 2025)
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) (☆422, updated Jul 5, 2025)
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising (☆212, updated Sep 27, 2025)
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality (☆259, updated Dec 27, 2024)
- 📚 Collection of awesome generation acceleration resources. (☆388, updated Jul 7, 2025)
- Official implementation of the paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models" (☆62, updated Jul 1, 2025)
- A CUDA kernel for NHWC GroupNorm for PyTorch (☆22, updated Nov 15, 2024)
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better (☆16, updated Feb 15, 2025)
- End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training). (☆393, updated Jan 8, 2026)
- (WIP) Parallel inference for black-forest-labs' FLUX model. (☆18, updated Nov 18, 2024)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference (☆643, updated Jan 15, 2026)
- VideoSys: An easy and efficient system for video generation (☆2,017, updated Aug 27, 2025)
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models (☆721, updated Dec 2, 2024)
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" (☆211, updated Nov 25, 2025)
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. (☆180, updated Oct 3, 2024)
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" (☆75, updated Mar 17, 2025)
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free (☆954, updated Jun 27, 2024)
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var… (☆150, updated Jun 25, 2025)
- Official PyTorch and Diffusers Implementation of "LinFusion: 1 GPU, 1 Minute, 16K Image" (☆313, updated Dec 23, 2024)
- A unified inference and post-training framework for accelerated video generation. (☆3,086, updated this week)
- [ICLR2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models (☆3,685, updated this week)
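Many of the entries above (FORA, Learning-to-Cache, DeepCache, SmoothCache, Adaptive Caching, FasterCache) share one core idea: because adjacent denoising steps of a diffusion transformer produce highly similar intermediate features, a block's output can be cached at one timestep and reused at the next instead of being recomputed. The following is a minimal, hypothetical Python sketch of that caching pattern; all names here are illustrative and do not reflect any listed project's actual API.

```python
# Hypothetical sketch of timestep-wise feature caching for a diffusion
# transformer block. Real projects decide *when* to refresh adaptively
# (error-guided, learned, or layer-wise); here we use a fixed interval.

class CachedBlock:
    """Wraps an expensive block; recomputes only every `refresh_every` steps."""

    def __init__(self, block, refresh_every=2):
        self.block = block
        self.refresh_every = refresh_every
        self.cache = None
        self.compute_calls = 0  # bookkeeping to show how much work is skipped

    def __call__(self, x, step):
        # Refresh the cache on the first call and then on every Nth step;
        # otherwise return the stale (but similar) output from a prior step.
        if self.cache is None or step % self.refresh_every == 0:
            self.cache = self.block(x)
            self.compute_calls += 1
        return self.cache


def expensive_block(x):
    # Stand-in for a DiT attention/MLP block.
    return [v * 2 for v in x]


cached = CachedBlock(expensive_block, refresh_every=2)
outputs = [cached([1.0, 2.0], step=t) for t in range(8)]
print(cached.compute_calls)  # recomputed on steps 0, 2, 4, 6 -> prints 4
```

With `refresh_every=2`, half of the block evaluations are skipped; the quality/speed trade-off hinges on how well the cached features approximate the skipped steps, which is exactly what the error-guided and learned variants above try to control.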