Alternatives and similar repositories for DiTFastAttn
☆192 · Updated Jan 14, 2025
Users who are interested in DiTFastAttn are comparing it to the libraries listed below.
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆55 · Updated Jul 8, 2024
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆156 · Updated Mar 21, 2025
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆34 · Updated Nov 29, 2024
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆119 · Updated Jul 15, 2024
- [ICLR2025] Accelerating Diffusion Transformers with Token-wise Feature Caching ☆215 · Updated Mar 14, 2025
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆655 · Updated Mar 6, 2026
- [WIP] Better (FP8) attention for Hopper ☆33 · Updated Feb 24, 2025
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,597 · Updated Apr 9, 2026
- Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" ☆171 · Updated Nov 5, 2024
- A parallel VAE that avoids OOM during high-resolution image generation ☆90 · Updated Mar 12, 2026
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆77 · Updated Sep 3, 2024
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT) based GenAI models with error-guided caching. ☆48 · Updated Jul 17, 2025
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆774 · Updated Aug 14, 2025
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ☆426 · Updated Jul 5, 2025
- T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free! ☆417 · Updated Feb 26, 2025
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models ☆58 · Updated Jun 26, 2025
- 📚 Collection of awesome generation acceleration resources. ☆398 · Updated Jul 7, 2025
- Official implementation of the paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models" ☆65 · Updated Jul 1, 2025
- [NeurIPS 2025] Official PyTorch implementation of the paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up". ☆214 · Updated Sep 27, 2025
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆976 · Updated Feb 25, 2026
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆262 · Updated Dec 27, 2024
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Updated Feb 15, 2025
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆214 · Updated Sep 27, 2025
- [ICCV2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers ☆388 · Updated Mar 2, 2026
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆23 · Updated Nov 15, 2024
- VideoSys: An easy and efficient system for video generation ☆2,021 · Updated Aug 27, 2025
- End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training). ☆397 · Updated Jan 8, 2026
- [ICML 2025] This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality… ☆53 · Updated Mar 25, 2025
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ☆727 · Updated Dec 2, 2024
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆212 · Updated Nov 25, 2025
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated Nov 18, 2024
- Code for Draft Attention ☆102 · Updated May 22, 2025
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆664 · Updated Jan 15, 2026
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆69 · Updated Jun 4, 2024
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆965 · Updated Jun 27, 2024
- Inference-only implementation of "One-Step Diffusion Distillation through Score Implicit Matching" [NIPS 2024] ☆83 · Updated Nov 17, 2024
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. 🎉 ☆538 · Updated Mar 19, 2026
- A unified inference and post-training framework for accelerated video generation. ☆3,396 · Updated this week