thu-ml / SpargeAttn
[ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference.
★916 · Updated 3 weeks ago
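SpargeAttn's core trick is training-free block-sparse attention: predict, per query block, which key blocks actually matter and compute attention only there. The PyTorch snippet below is a minimal illustrative sketch of that general pattern, assuming mean-pooled block scoring; the function name, block size, and keep ratio are hypothetical, and it does not reflect SpargeAttn's actual kernels or API.

```python
import torch
import torch.nn.functional as F

def block_sparse_attention(q, k, v, block_size=64, keep_ratio=0.3):
    """Illustrative block-sparse attention (hypothetical, not SpargeAttn's API).
    Scores key blocks against each query block via mean-pooled representatives
    and attends only to the top-scoring blocks.
    Shapes: (batch, heads, seq, dim); seq must be divisible by block_size."""
    b, h, n, d = q.shape
    nb = n // block_size
    # Mean-pool each block of queries/keys into a single representative vector.
    q_blk = q.reshape(b, h, nb, block_size, d).mean(dim=3)      # (b, h, nb, d)
    k_blk = k.reshape(b, h, nb, block_size, d).mean(dim=3)      # (b, h, nb, d)
    # Block-level relevance scores; keep the top-k key blocks per query block.
    blk_scores = q_blk @ k_blk.transpose(-1, -2)                # (b, h, nb, nb)
    k_keep = max(1, int(keep_ratio * nb))
    top_blocks = blk_scores.topk(k_keep, dim=-1).indices        # (b, h, nb, k_keep)
    blk_mask = torch.zeros(b, h, nb, nb, dtype=torch.bool, device=q.device)
    blk_mask.scatter_(-1, top_blocks, True)
    # Expand the block mask to token level. A real sparse kernel would skip
    # the masked blocks entirely instead of materializing an n x n mask.
    mask = blk_mask.repeat_interleave(block_size, dim=2)
    mask = mask.repeat_interleave(block_size, dim=3)            # (b, h, n, n)
    scores = (q @ k.transpose(-1, -2)) / d ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```

With `keep_ratio=0.3`, roughly 70% of the key blocks are never attended to; the speedup in practice comes from a fused kernel that skips those blocks rather than masking them after the fact.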
Alternatives and similar repositories for SpargeAttn
Users interested in SpargeAttn are comparing it to the libraries listed below.
- Model Compression Toolbox for Large Language Models and Diffusion Models · ★744 · Updated 5 months ago
- A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. · ★504 · Updated last week
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ★622 · Updated this week
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention · ★621 · Updated last month
- A PyTorch-native and Flexible Inference Engine with Hybrid Cache Acceleration and Parallelism for DiTs. · ★929 · Updated this week
- A sparse attention kernel supporting mixed sparse patterns · ★442 · Updated last week
- [NeurIPS 2025] Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation · ★575 · Updated 2 months ago
- https://wavespeed.ai/ · Context parallel attention that accelerates DiT model inference with dynamic caching · ★416 · Updated 6 months ago
- [ICCV2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers · ★360 · Updated 5 months ago
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models · ★718 · Updated last year
- [ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves a 2-5x speedup over FlashAttention without losing end-to-end… · ★3,094 · Updated last week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference · ★637 · Updated 2 weeks ago
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse-Linear Attention · ★258 · Updated last week
- ★191 · Updated last year
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation · ★146 · Updated 10 months ago
- [ICLR2025] Accelerating Diffusion Transformers with Token-wise Feature Caching · ★206 · Updated 10 months ago
- Collection of awesome generation acceleration resources. · ★383 · Updated 6 months ago
- Aiming to integrate most existing feature caching-based diffusion acceleration schemes into a unified framework. · ★82 · Updated 3 months ago
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism · ★2,512 · Updated last week
- A parallelized VAE that avoids OOM for high-resolution image generation · ★85 · Updated 5 months ago
- ★444 · Updated 5 months ago
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★959 · Updated 10 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" · ★799 · Updated 2 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ★266 · Updated 6 months ago
- High performance inference engine for diffusion models · ★103 · Updated 4 months ago
- Efficient Triton implementation of Native Sparse Attention. · ★261 · Updated 8 months ago
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo · ★1,576 · Updated this week
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var… · ★150 · Updated 7 months ago
- Combining TeaCache with xDiT to Accelerate Visual Generation Models · ★32 · Updated 9 months ago
- Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model · ★1,235 · Updated 7 months ago
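Several of the entries above (TeaCache, TaylorSeers, token-wise feature caching) trade a little accuracy for speed by reusing computation across diffusion timesteps. The sketch below is a naive illustration of that caching idea; the class name, threshold, and call signature are hypothetical and unrelated to any listed library's API.

```python
import torch

class NaiveStepCache:
    """Hypothetical output cache for diffusion sampling: reuse the previous
    denoiser output whenever the conditioning signal (here the timestep
    embedding) changes very little between consecutive steps."""
    def __init__(self, rel_threshold=0.05):
        self.rel_threshold = rel_threshold
        self.prev_emb = None
        self.prev_out = None

    def __call__(self, model, x, t_emb):
        if self.prev_emb is not None:
            # Relative change of the timestep embedding between steps.
            rel_change = (t_emb - self.prev_emb).norm() / self.prev_emb.norm()
            if rel_change < self.rel_threshold:
                return self.prev_out              # cache hit: skip the forward pass
        out = model(x, t_emb)                      # cache miss: run the denoiser
        self.prev_emb, self.prev_out = t_emb, out
        return out
```

Real implementations such as TeaCache use more careful change estimates and typically cache residuals rather than raw outputs, but the control flow is the same: estimate how much the current step will change the result, and skip the expensive forward pass when that estimate is small.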