High-performance inference engine for diffusion models
☆107 · Sep 5, 2025 · Updated 6 months ago
Alternatives and similar repositories for DAX
Users interested in DAX are comparing it to the libraries listed below.
- Triton-based sparse quantization attention kernel collection ☆43 · Aug 29, 2025 · Updated 6 months ago
- ☆52 · May 19, 2025 · Updated 10 months ago
- Distributed parallel 3D-Causal-VAE for efficient training and inference ☆47 · Aug 20, 2025 · Updated 7 months ago
- ☆32 · Jul 2, 2025 · Updated 8 months ago
- Principles and Methodologies for Serial Performance Optimization (OSDI '25) ☆27 · Jun 5, 2025 · Updated 9 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ⚡️ ☆149 · May 10, 2025 · Updated 10 months ago
- Benchmarking attention mechanisms in Vision Transformers ☆20 · Oct 10, 2022 · Updated 3 years ago
- [ICLR 2026] Official implementation of DiCache: Let Diffusion Model Determine Its Own Cache ☆58 · Jan 26, 2026 · Updated last month
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching ☆59 · Oct 27, 2025 · Updated 4 months ago
- Inferix: A Block-Diffusion based Next-Generation Inference Engine for World Simulation ☆118 · Feb 27, 2026 · Updated 3 weeks ago
- https://wavespeed.ai/ Context parallel attention that accelerates DiT model inference with dynamic caching ☆425 · Jul 5, 2025 · Updated 8 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆74 · Sep 3, 2024 · Updated last year
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library ☆23 · Sep 1, 2025 · Updated 6 months ago
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention] ☆57 · Mar 5, 2025 · Updated last year
- A Triton JIT runtime and FFI provider in C++ ☆32 · Updated this week
- Tile-based language built for AI computation across all scales ☆138 · Updated this week
- FlagCX is a scalable and adaptive cross-chip communication library ☆179 · Updated this week
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,566 · Mar 13, 2026 · Updated last week
- Flexible and pluggable serving engine for diffusion LLMs ☆64 · Updated this week
- A simple API to use CUPTI ☆10 · Aug 19, 2025 · Updated 7 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated this week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments ☆95 · Jan 16, 2026 · Updated 2 months ago
- ☆16 · Sep 12, 2023 · Updated 2 years ago
- To pioneer training long-context multi-modal transformer models ☆71 · Aug 8, 2025 · Updated 7 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆91 · Jan 26, 2026 · Updated last month
- [ECCV 2024] 3DPE: Real-time 3D-aware Portrait Editing from a Single Image ☆22 · Sep 15, 2025 · Updated 6 months ago
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆44 · Nov 19, 2025 · Updated 4 months ago
- DeeperGEMM: crazily optimized version ☆75 · May 5, 2025 · Updated 10 months ago
- JAX bindings for the flash-attention3 kernels ☆21 · Jan 2, 2026 · Updated 2 months ago
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ☆48 · Feb 17, 2026 · Updated last month
- Gensis is a lightweight deep learning framework written from scratch in Python, with Triton as its backend for high-performance computing ☆37 · Jan 15, 2026 · Updated 2 months ago
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. 🎉 ☆526 · Updated this week
- Venus Collective Communication Library, supported by SII and Infrawaves ☆140 · Mar 4, 2026 · Updated 2 weeks ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Mar 12, 2026 · Updated last week
- Tracking the latest and greatest research papers on diffusion large language models ☆23 · Mar 13, 2026 · Updated last week
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn-3 💨 ColumnSparseGEMM 2.5× … ☆103 · Sep 8, 2025 · Updated 6 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆273 · Jul 6, 2025 · Updated 8 months ago
- Tutorials on extending and importing TVM with a CMake include dependency ☆15 · Oct 11, 2024 · Updated last year