chengzeyi / ParaAttention
https://wavespeed.ai/
Context parallel attention that accelerates DiT model inference with dynamic caching
☆416 · Updated 6 months ago
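The core idea behind context-parallel attention can be illustrated without any distributed machinery. This is a minimal single-process sketch (not ParaAttention's actual API, which runs on PyTorch with real multi-GPU communication): the query sequence is sharded across workers, each shard attends over the full key/value sequence, and the concatenated shard outputs match full attention exactly.

```python
import numpy as np

def attention(q, k, v):
    # plain scaled dot-product attention with a row-wise softmax
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4))
k = rng.standard_normal((8, 4))
v = rng.standard_normal((8, 4))

full = attention(q, k, v)

# "context parallel": shard the queries across 2 simulated workers;
# each worker attends over the full K/V and outputs are concatenated
sharded = np.concatenate([attention(qc, k, v) for qc in np.split(q, 2)])

assert np.allclose(full, sharded)
```

Because the softmax normalizes each query row independently, sharding along the query dimension changes nothing mathematically; the engineering work in real systems is distributing K/V (e.g., ring-style exchange) and overlapping that communication with compute.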
Alternatives and similar repositories for ParaAttention
Users interested in ParaAttention are comparing it to the libraries listed below.
- [NeurIPS 2025] Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation · ☆575 · Updated 2 months ago
- Model Compression Toolbox for Large Language Models and Diffusion Models · ☆744 · Updated 5 months ago
- End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training) · ☆392 · Updated 3 weeks ago
- [ICCV 2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers · ☆360 · Updated 5 months ago
- 🤗 A PyTorch-native and flexible inference engine with hybrid cache acceleration and parallelism for DiTs · ☆929 · Updated this week
- Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model · ☆1,235 · Updated 7 months ago
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality · ☆258 · Updated last year
- The official code for NeurIPS 2025 "MagCache: Fast Video Generation with Magnitude-Aware Cache" · ☆254 · Updated 2 months ago
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention · ☆621 · Updated last month
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising · ☆212 · Updated 4 months ago
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
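Several entries above (TeaCache, FasterCache, MagCache) share one idea: across adjacent denoising steps a diffusion transformer's output changes slowly, so the expensive forward pass can be skipped and a cached result reused whenever a cheap indicator (e.g., the timestep embedding) has barely changed. A minimal sketch of that cache-and-reuse loop, with a hypothetical `expensive_block` standing in for the DiT forward and a simple relative-norm criterion (the real methods use more refined, calibrated indicators):

```python
import numpy as np

def expensive_block(x, t_emb):
    # hypothetical stand-in for a costly DiT forward pass
    return np.tanh(x + t_emb)

def run_with_cache(x, t_embs, threshold=0.05):
    """Run a denoising loop, reusing the cached output when the
    timestep embedding's relative change is below `threshold`."""
    cached_out, prev_emb = None, None
    n_skipped = 0
    outs = []
    for t_emb in t_embs:
        if (prev_emb is not None and
                np.linalg.norm(t_emb - prev_emb) / np.linalg.norm(prev_emb) < threshold):
            out = cached_out        # reuse cached result, skip the block
            n_skipped += 1
        else:
            out = expensive_block(x, t_emb)
            cached_out, prev_emb = out, t_emb
        outs.append(out)
    return outs, n_skipped

# with slowly varying embeddings, most steps hit the cache
t_embs = [np.full(4, 1.0 + 0.001 * i) for i in range(10)]
outs, n_skipped = run_with_cache(np.zeros(4), t_embs)
```

The quality/speed trade-off lives in the threshold and the indicator: too aggressive and reused outputs drift from the true trajectory, too conservative and no compute is saved.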