🎬 3.7× faster video generation E2E · 🖼️ 1.6× faster image generation E2E · ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 · 💨 ColumnSparseGEMM 2.5× vs cuBLAS
☆103 · Sep 8, 2025 · Updated 6 months ago
Alternatives and similar repositories for chipmunk
Users that are interested in chipmunk are comparing it to the libraries listed below
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆33 · Nov 29, 2024 · Updated last year
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels ☆134 · Nov 26, 2025 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton ☆438 · Feb 1, 2026 · Updated last month
- A sparse attention kernel supporting mixed sparse patterns ☆480 · Jan 18, 2026 · Updated 2 months ago
- Automating analysis from trace files ☆63 · Mar 13, 2026 · Updated last week
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆646 · Mar 6, 2026 · Updated 2 weeks ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆212 · Sep 27, 2025 · Updated 5 months ago
- https://wavespeed.ai/ Context-parallel attention that accelerates DiT model inference with dynamic caching ☆426 · Jul 5, 2025 · Updated 8 months ago
- ☆32 · Jul 2, 2025 · Updated 8 months ago
- EleutherAI ML Performance reading group repository (slides, meeting recordings, annotated papers) ☆31 · Dec 19, 2025 · Updated 3 months ago
- Code for Draft Attention ☆100 · May 22, 2025 · Updated 10 months ago
- Wan: Open and Advanced Large-Scale Video Generative Models ☆28 · Jul 28, 2025 · Updated 7 months ago
- Aims to integrate most existing feature-caching-based diffusion acceleration schemes into a unified framework. ☆97 · Oct 23, 2025 · Updated 4 months ago
- Official implementation of the paper "Grouping First, Attending Smartly: Training-Free Acceleration for Diff…" ☆55 · May 21, 2025 · Updated 10 months ago
- Making Flux go brrr on GPUs. ☆163 · Jan 5, 2026 · Updated 2 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆961 · Feb 25, 2026 · Updated 3 weeks ago
- [NeurIPS 2025] Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation ☆587 · Nov 11, 2025 · Updated 4 months ago
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 7 months ago
- The official implementation of "Sparse-vDiT: Unleashing the Power of Sparse Attention to Accelerate Video Diffusion Transformers" (arXiv …) ☆51 · Jun 6, 2025 · Updated 9 months ago
- [ICML 2021] "Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inferen…" ☆16 · Feb 13, 2022 · Updated 4 years ago
- flex-block-attn: an efficient block-sparse attention computation library ☆127 · Dec 26, 2025 · Updated 2 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆676 · Updated this week
- [TMM 2025] Official Implementation of DreamJourney: Perpetual View Generation with Video Diffusion Models ☆18 · Jun 24, 2025 · Updated 8 months ago
- Fastest kernels written from scratch ☆559 · Sep 18, 2025 · Updated 6 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- A Quirky Assortment of CuTe Kernels ☆861 · Updated this week
- [ICLR 2026] Official implementation of DiCache: Let Diffusion Model Determine Its Own Cache ☆58 · Jan 26, 2026 · Updated last month
- Perplexity GPU Kernels ☆564 · Nov 7, 2025 · Updated 4 months ago
- ☆261 · Jul 11, 2024 · Updated last year
- Frame Guidance: Training-Free Guidance for Frame-Level Control in Video Diffusion Models (ICLR 2026) ☆45 · Mar 3, 2026 · Updated 2 weeks ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆49 · Jan 21, 2026 · Updated 2 months ago
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- ☆52 · May 19, 2025 · Updated 10 months ago
- RealisMotion: Decomposed Human Motion Control and Video Generation in the World Space ☆39 · Oct 16, 2025 · Updated 5 months ago
- A Golang-based library for packet manipulation and dissection ☆10 · Mar 10, 2024 · Updated 2 years ago
- ☆13 · Jan 15, 2025 · Updated last year
- The official PyTorch implementation of "BLADE: Block-Sparse Attention Meets Step Distillation for Efficient Video Generation." ☆40 · Oct 9, 2025 · Updated 5 months ago
- Quartet II Official Code ☆53 · Mar 1, 2026 · Updated 2 weeks ago
- A parallel VAE that avoids OOM for high-resolution image generation ☆89 · Mar 12, 2026 · Updated last week