mit-han-lab / distrifuser
[CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
☆721, updated Dec 2, 2024
Alternatives and similar repositories for distrifuser
Users interested in distrifuser are comparing it to the libraries listed below.
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism (☆2,527, updated Feb 5, 2026)
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free (☆952, updated Jun 27, 2024)
- VideoSys: An easy and efficient system for video generation (☆2,017, updated Aug 27, 2025)
- Patch convolution to avoid large GPU memory usage of Conv2D (☆95, updated Jan 23, 2025)
- An auxiliary project analyzing the characteristics of KV in DiT attention (☆32, updated Nov 29, 2024)
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising (☆212, updated Sep 27, 2025)
- OneDiff: An out-of-the-box acceleration library for diffusion models (☆1,966, updated Dec 4, 2025)
- T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free! (☆415, updated Feb 26, 2025)
- [NeurIPS 2024] Official implementation of "Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models" (☆350, updated Mar 16, 2025)
- Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs (https://wavespeed.ai/) (☆1,299, updated Mar 27, 2025)
- Model Compression Toolbox for Large Language Models and Diffusion Models (☆753, updated Aug 14, 2025)
- Official PyTorch and Diffusers implementation of "LinFusion: 1 GPU, 1 Minute, 16K Image" (☆313, updated Dec 23, 2024)
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models (☆370, updated Mar 21, 2024)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆812, updated Mar 6, 2025)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference (☆643, updated Jan 15, 2026)
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching (☆116, updated Jul 15, 2024)
- PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis (☆3,279, updated Oct 31, 2024)
- [ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time (☆510, updated Mar 7, 2024)
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention (☆627, updated Feb 3, 2026)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆372, updated Jul 10, 2025)
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation (☆1,917, updated Oct 30, 2025)
- A unified inference and post-training framework for accelerated video generation (☆3,059, updated Feb 7, 2026)
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation (☆1,897, updated Oct 31, 2024)
- Ring attention implementation with flash attention (☆980, updated Sep 10, 2025)
- [NeurIPS 2025] Official PyTorch implementation of "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up" (☆214, updated Sep 27, 2025)
- A throughput-oriented high-performance serving framework for LLMs (☆945, updated Oct 29, 2025)
- Official PyTorch implementation of "Scalable Diffusion Models with Transformers" (☆8,336, updated May 31, 2024)
- [NeurIPS 2024 Oral 🔥] Improved Distribution Matching Distillation for Fast Image Synthesis (☆1,226, updated Mar 5, 2025)
- Lumina-T2X: A unified framework for text-to-any-modality generation (☆2,251, updated Feb 16, 2025)
- (☆190, updated Jan 14, 2025)
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: sampling, caching, quantization, parallelism, etc. 🎉 (☆518, updated Jan 18, 2026)
- Code for the ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" (☆166, updated Nov 5, 2024)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆524, updated Feb 10, 2025)
- Consistency Distilled Diff VAE (☆2,207, updated Nov 7, 2023)
- [ICLR 2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models (☆3,673, updated this week)
- Official repository of the paper "Trajectory Consistency Distillation" (☆359, updated Apr 28, 2024)
- [NeurIPS 2022, T-PAMI 2023] Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models (☆268, updated Mar 18, 2024)
- Code repository for T2V-Turbo and T2V-Turbo-v2 (☆310, updated Jan 31, 2025)
- A parallel VAE that avoids OOM in high-resolution image generation (☆85, updated Aug 4, 2025)