[CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
☆724 · Updated Dec 2, 2024
Alternatives and similar repositories for distrifuser
Users interested in distrifuser are comparing it to the repositories listed below.
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,549 · Updated Feb 26, 2026
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆957 · Updated Jun 27, 2024
- VideoSys: An easy and efficient system for video generation ☆2,016 · Updated Aug 27, 2025
- Patch convolution to avoid the large GPU memory usage of Conv2D ☆95 · Updated Jan 23, 2025
- An auxiliary project analyzing the characteristics of KV in DiT attention ☆33 · Updated Nov 29, 2024
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆212 · Updated Sep 27, 2025
- OneDiff: An out-of-the-box acceleration library for diffusion models ☆1,970 · Updated Dec 4, 2025
- T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free! ☆415 · Updated Feb 26, 2025
- https://wavespeed.ai/ Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs ☆1,303 · Updated Mar 27, 2025
- [NeurIPS 2024] Official implementation of "Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models" ☆350 · Updated Mar 16, 2025
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆761 · Updated Aug 14, 2025
- Official PyTorch and Diffusers implementation of "LinFusion: 1 GPU, 1 Minute, 16K Image" ☆314 · Updated Dec 23, 2024
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models ☆370 · Updated Mar 21, 2024
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆817 · Updated Mar 6, 2025
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆644 · Updated Jan 15, 2026
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆117 · Updated Jul 15, 2024
- PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis ☆3,281 · Updated Oct 31, 2024
- [ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time ☆510 · Updated Mar 7, 2024
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆629 · Updated Feb 3, 2026
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation ☆1,920 · Updated Oct 30, 2025
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation ☆1,897 · Updated Oct 31, 2024
- A unified inference and post-training framework for accelerated video generation ☆3,111 · Updated Feb 28, 2026
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆374 · Updated Jul 10, 2025
- Ring attention implementation with flash attention ☆987 · Updated Sep 10, 2025
- [NeurIPS 2025] Official PyTorch implementation of "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up" ☆215 · Updated Sep 27, 2025
- A throughput-oriented, high-performance serving framework for LLMs ☆947 · Updated Oct 29, 2025
- Official PyTorch implementation of "Scalable Diffusion Models with Transformers" ☆8,393 · Updated May 31, 2024
- [NeurIPS 2024 Oral 🔥] Improved Distribution Matching Distillation for Fast Image Synthesis ☆1,239 · Updated Mar 5, 2025
- Lumina-T2X: a unified framework for text-to-any-modality generation ☆2,252 · Updated Feb 16, 2025
- ☆191 · Updated Jan 14, 2025
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. 🎉 ☆525 · Updated Feb 25, 2026
- Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" ☆167 · Updated Nov 5, 2024
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆527 · Updated Feb 10, 2025
- Consistency Distilled Diff VAE ☆2,211 · Updated Nov 7, 2023
- Official repository of the paper "Trajectory Consistency Distillation" ☆362 · Updated Apr 28, 2024
- [ICLR 2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models ☆3,703 · Updated Feb 14, 2026
- [NeurIPS 2022, T-PAMI 2023] Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models ☆268 · Updated Mar 18, 2024
- Code repository for T2V-Turbo and T2V-Turbo-v2 ☆314 · Updated Jan 31, 2025
- A parallel VAE that avoids OOM for high-resolution image generation ☆85 · Updated Aug 4, 2025