thu-ml / SageAttention
[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.
☆2,826 · Updated last week
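A minimal usage sketch, assuming the `sageattention` package is installed on a CUDA machine and calling its documented `sageattn` entry point as a drop-in for `torch.nn.functional.scaled_dot_product_attention`; shapes and dtypes below are illustrative:

```python
# Sketch only: assumes `pip install sageattention`, a CUDA GPU, and that `sageattn`
# accepts (q, k, v) in (batch, heads, seq_len, head_dim) layout ("HND"), mirroring
# the interface of scaled_dot_product_attention.
import torch
from sageattention import sageattn

batch, heads, seq_len, head_dim = 2, 16, 4096, 64
q = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")

# Quantized attention: same output shape as the full-precision baseline.
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
print(out.shape)  # torch.Size([2, 16, 4096, 64])
```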
Alternatives and similar repositories for SageAttention
Users interested in SageAttention are comparing it to the libraries listed below.
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,474 · Updated this week
- [ICLR2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models ☆3,461 · Updated last month
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆826 · Updated this week
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆710 · Updated 4 months ago
- Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model ☆1,198 · Updated 6 months ago
- A unified inference and post-training framework for accelerated video generation. ☆2,814 · Updated this week
- Light Video Generation Inference Framework ☆1,133 · Updated last week
- Fork of the Triton language and compiler for Windows support and easy installation ☆1,657 · Updated last week
- https://wavespeed.ai/ Context parallel attention that accelerates DiT model inference with dynamic caching ☆403 · Updated 5 months ago
- A pipeline parallel training script for diffusion models. ☆1,769 · Updated last week
- 🤗 A PyTorch-native Inference Engine with Hybrid Cache Acceleration and Parallelism for DiTs: Z-Image, FLUX2, Qwen-Image, etc. ☆761 · Updated this week
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. 🎉 ☆468 · Updated 2 weeks ago
- Scalable and memory-optimized training of diffusion models ☆1,309 · Updated 6 months ago
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ☆713 · Updated last year
- ☆1,547 · Updated this week
- [NeurIPS 2025] Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation ☆563 · Updated last month
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,524 · Updated last month
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆584 · Updated this week
- Qwen-Image-Lightning: Speed up Qwen-Image model with distillation ☆1,044 · Updated 2 weeks ago
- (CVPR 2025) From Slow Bidirectional to Fast Autoregressive Video Diffusion Models ☆1,099 · Updated 4 months ago
- https://wavespeed.ai/ Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs. ☆1,296 · Updated 8 months ago
- HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation ☆2,584 · Updated last month
- https://wavespeed.ai/ [WIP] The all-in-one inference optimization solution for ComfyUI: universal, flexible, and fast. ☆1,199 · Updated 4 months ago
- 📹 A more flexible framework that generates videos at any resolution and creates videos from images. ☆1,643 · Updated 2 weeks ago
- VideoSys: An easy and efficient system for video generation ☆2,008 · Updated 3 months ago
- (NeurIPS 2024 Oral 🔥) Improved Distribution Matching Distillation for Fast Image Synthesis ☆1,114 · Updated 9 months ago
- Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference ☆1,222 · Updated last month
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆949 · Updated last year
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆597 · Updated last week
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆572 · Updated 3 weeks ago