High-performance inference engine for diffusion models
☆105 · Sep 5, 2025 · Updated 5 months ago
Alternatives and similar repositories for DAX
Users interested in DAX are comparing it to the repositories listed below.
- Triton-based sparse quantization attention kernel collection ☆40 · Aug 29, 2025 · Updated 6 months ago
- ☆52 · May 19, 2025 · Updated 9 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- ☆32 · Jul 2, 2025 · Updated 7 months ago
- A Triton JIT runtime and FFI provider in C++ ☆31 · Updated this week
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching ☆58 · Oct 27, 2025 · Updated 4 months ago
- Benchmarking attention mechanisms in Vision Transformers ☆20 · Oct 10, 2022 · Updated 3 years ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ☆147 · May 10, 2025 · Updated 9 months ago
- [ICLR 2026] Official implementation of DiCache: Let Diffusion Model Determine Its Own Cache ☆55 · Jan 26, 2026 · Updated last month
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ☆422 · Jul 5, 2025 · Updated 7 months ago
- Tile-based language built for AI computation across all scales ☆138 · Updated this week
- [ECCV 2024] 3DPE: Real-time 3D-aware Portrait Editing from a Single Image ☆22 · Sep 15, 2025 · Updated 5 months ago
- High-performance KV cache store for LLMs ☆47 · Updated this week
- A simple API for using CUPTI ☆11 · Aug 19, 2025 · Updated 6 months ago
- Distributed parallel 3D-Causal-VAE for efficient training and inference ☆47 · Aug 20, 2025 · Updated 6 months ago
- Venus Collective Communication Library, supported by SII and Infrawaves ☆138 · Updated this week
- Principles and Methodologies for Serial Performance Optimization (OSDI '25) ☆25 · Jun 5, 2025 · Updated 8 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Jul 6, 2025 · Updated 7 months ago
- DeepXTrace: a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments ☆93 · Jan 16, 2026 · Updated last month
- DLSlime: a flexible and efficient heterogeneous transfer toolkit ☆92 · Jan 26, 2026 · Updated last month
- A fork of flux-fast that makes it even faster with cache-dit: 3.3x speedup on an NVIDIA L20 ☆24 · Jul 18, 2025 · Updated 7 months ago
- FastCache: fast caching for Diffusion Transformers through learnable linear approximation [Efficient ML Model] ☆46 · Feb 17, 2026 · Updated last week
- 🤖 FFPA: extends FlashAttention-2 with Split-D for ~O(1) SRAM complexity at large head dims; 1.8x–3x 🎉 speedup vs SDPA EA ☆251 · Feb 13, 2026 · Updated 2 weeks ago
- Flexible and pluggable serving engine for diffusion LLMs ☆58 · Feb 14, 2026 · Updated 2 weeks ago
- OpenVE-3M: a large-scale, high-quality dataset for instruction-guided video editing ☆38 · Jan 9, 2026 · Updated last month
- ☆62 · Feb 15, 2026 · Updated last week
- ☆16 · Sep 12, 2023 · Updated 2 years ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Feb 9, 2026 · Updated 2 weeks ago
- JAX bindings for the flash-attention3 kernels ☆21 · Jan 2, 2026 · Updated last month
- xDiT: a scalable inference engine for Diffusion Transformers (DiTs) with massive parallelism ☆2,544 · Feb 21, 2026 · Updated last week
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆74 · Sep 3, 2024 · Updated last year
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention] ☆52 · Mar 5, 2025 · Updated 11 months ago
- ☆53 · Updated this week
- 📚 A curated list of awesome diffusion inference papers with code: sampling, caching, quantization, parallelism, etc. 🎉 ☆525 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Jun 3, 2024 · Updated last year
- Tutorials on extending and importing TVM with a CMake include dependency ☆16 · Oct 11, 2024 · Updated last year
- DeeperGEMM: a heavily optimized version ☆74 · May 5, 2025 · Updated 9 months ago
- FlagCX: a scalable and adaptive cross-chip communication library ☆174 · Updated this week
- ☆27 · Jan 7, 2025 · Updated last year
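Several of the repositories above ("Quantized Attention on GPU", Q-DiT, the sparse quantization kernel collections) center on quantizing the attention computation inside diffusion models. As an illustrative sketch only, not code from any listed repository, the core idea of per-tensor int8 quantization of the QKᵀ stage can be shown in plain NumPy and checked against an FP32 reference:

```python
import numpy as np

def quantize_int8(x):
    # Per-tensor symmetric quantization: scale so max|x| maps to 127.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def softmax(scores):
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)

def attention_fp32(q, k, v):
    # Reference: standard scaled dot-product attention in FP32.
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def attention_int8_qk(q, k, v):
    # Quantize Q and K to int8, accumulate QK^T in int32
    # (mimicking integer Tensor Core accumulation), then
    # dequantize with the product of the two scales.
    qq, sq = quantize_int8(q)
    kq, sk = quantize_int8(k)
    scores = (qq.astype(np.int32) @ kq.astype(np.int32).T) * (sq * sk)
    return softmax(scores / np.sqrt(q.shape[-1])) @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 64)).astype(np.float32)
k = rng.standard_normal((8, 64)).astype(np.float32)
v = rng.standard_normal((8, 64)).astype(np.float32)

ref = attention_fp32(q, k, v)
approx = attention_int8_qk(q, k, v)
err = np.abs(ref - approx).max()
```

The real kernels in the repos above do this on GPU with fused softmax and often finer-grained (per-block or per-channel) scales; the per-tensor scheme here is the simplest variant and already keeps the output close to the FP32 reference.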