[WIP] Better (FP8) attention for Hopper
☆32 · Updated Feb 24, 2025
Alternatives and similar repositories for QuantumAttention
Users interested in QuantumAttention are comparing it to the libraries listed below:
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated Nov 18, 2024
- An experimental communicating attention kernel based on DeepEP. ☆35 · Updated Jul 29, 2025
- Kernel library wheel for SGLang. ☆16 · Updated this week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling. ☆21 · Updated this week
- A CUDA kernel for NHWC GroupNorm for PyTorch. ☆22 · Updated Nov 15, 2024
- ☆191 · Updated Jan 14, 2025
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- Quantized attention on GPU. ☆44 · Updated Nov 22, 2024
- ☆52 · Updated May 19, 2025
- ☆33 · Updated Feb 3, 2025
- Triton kernels for Flux. ☆22 · Updated Jul 7, 2025
- ☆20 · Updated Sep 28, 2024
- ☆64 · Updated this week
- ☆26 · Updated Feb 17, 2025
- Official repository for the paper "Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention for Test-Time Regressi…" ☆23 · Updated Oct 1, 2025
- Xmixers: a collection of SOTA efficient token/channel mixers. ☆28 · Updated Sep 4, 2025
- Parsers for CUDA binary files. ☆24 · Updated Dec 29, 2023
- A parallel VAE that avoids OOM for high-resolution image generation. ☆89 · Updated Mar 12, 2026
- FlexAttention with FlashAttention3 support. ☆27 · Updated Oct 5, 2024
- ☆19 · Updated Dec 24, 2024
- ☆65 · Updated Apr 26, 2025
- KsanaDiT: high-performance DiT (Diffusion Transformer) inference framework for video and image generation. ☆46 · Updated Mar 6, 2026
- GPTQ inference TVM kernel. ☆40 · Updated Apr 25, 2024
- No-GIL Python environment featuring NVIDIA deep learning libraries. ☆70 · Updated Apr 14, 2025
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: accelerating video diffusion transformers with sparse attention. ☆636 · Updated Mar 6, 2026
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions). ☆17 · Updated Jan 11, 2025
- ☆90 · Updated May 31, 2025
- ☆87 · Updated Jan 23, 2025
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs; achieve peak performance. ☆149 · Updated May 10, 2025
- Triton-based sparse quantized attention kernel collection. ☆43 · Updated Aug 29, 2025
- TiledLower: a dataflow analysis and codegen framework written in Rust. ☆13 · Updated Nov 23, 2024
- Compare hardware platforms via the roofline model for LLM inference tasks. ☆118 · Updated Mar 13, 2024
- Multi-level Triton runner supporting Python, IR, PTX, and cubin. ☆84 · Updated this week
- High-performance RMSNorm implementation using SM core storage (registers and shared memory). ☆29 · Updated Jan 22, 2026
- ☆80 · Updated Dec 27, 2024
- 📊 Research-focused SDXL training framework exploring novel optimization approaches. Goals include enhanced image quality, training stabi… ☆21 · Updated Jun 7, 2025
- ☆116 · Updated May 16, 2025
- Beyond KV Caching: shared attention for efficient LLMs. ☆20 · Updated Jul 19, 2024
- https://wavespeed.ai/ Context-parallel attention that accelerates DiT model inference with dynamic caching. ☆425 · Updated Jul 5, 2025