alibaba / SRDiffusion
Accelerate Video Diffusion Inference via Sketching-Rendering Cooperation
☆18 · Updated 5 months ago
Alternatives and similar repositories for SRDiffusion
Users interested in SRDiffusion are comparing it to the libraries listed below.
- ☆186 · Updated 10 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆376 · Updated 9 months ago
- A parallel VAE that avoids OOM for high-resolution image generation ☆82 · Updated 3 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆254 · Updated 4 months ago
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆137 · Updated last week
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification ☆29 · Updated 7 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆556 · Updated this week
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆132 · Updated 7 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching ☆47 · Updated 3 weeks ago
- ☆435 · Updated 3 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆249 · Updated 3 months ago
- 📚 Collection of awesome generation acceleration resources. ☆363 · Updated 4 months ago
- ☆143 · Updated this week
- [ICLR2025] Accelerating Diffusion Transformers with Token-wise Feature Caching ☆195 · Updated 8 months ago
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆154 · Updated last month
- Efficient triton implementation of Native Sparse Attention. ☆248 · Updated 5 months ago
- Aiming to integrate most existing feature caching-based diffusion acceleration schemes into a unified framework. ☆77 · Updated 3 weeks ago
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var… ☆146 · Updated 4 months ago
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆575 · Updated last month
- High performance inference engine for diffusion models ☆95 · Updated 2 months ago
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆60 · Updated 4 months ago
- 📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc.🎉 ☆442 · Updated 3 months ago
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆779 · Updated this week
- To pioneer training long-context multi-modal transformer models ☆62 · Updated 3 months ago
- [ICCV2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers ☆329 · Updated 3 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆598 · Updated last month
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× … ☆90 · Updated 2 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆176 · Updated last year
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆185 · Updated this week
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆116 · Updated last year