[WIP] Better (FP8) attention for Hopper
☆33 · Updated Feb 24, 2025
Alternatives and similar repositories for QuantumAttention
Users interested in QuantumAttention are comparing it to the libraries listed below.
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated Nov 18, 2024
- An experimental communicating attention kernel based on DeepEP. ☆34 · Updated Jul 29, 2025
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (see the quantization sketch after this list) ☆22 · Updated this week
- SGLang Kernel Wheel Index ☆22 · Updated Apr 21, 2026
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆23 · Updated Nov 15, 2024
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- ☆52 · Updated May 19, 2025
- ☆33 · Updated Feb 3, 2025
- Triton kernels for Flux ☆23 · Updated Jul 7, 2025
- ☆20 · Updated Sep 28, 2024
- ☆65 · Updated this week
- ☆26 · Updated Feb 17, 2025
- Official repository for Flash Local Linear Attention ☆23 · Updated Apr 23, 2026
- Parsers for CUDA binary files ☆24 · Updated Dec 29, 2023
- A parallel VAE that avoids OOM in high-resolution image generation ☆91 · Updated Apr 21, 2026
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- ☆20 · Updated Dec 24, 2024
- ☆66 · Updated Apr 26, 2025
- KsanaDiT: High-Performance DiT (Diffusion Transformer) Inference Framework for Video & Image Generation ☆50 · Updated Mar 30, 2026
- GPTQ inference TVM kernel ☆40 · Updated Apr 25, 2024
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆71 · Updated Apr 14, 2025
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆659 · Updated Mar 6, 2026
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Updated Jan 11, 2025
- Distributed parallel 3D-Causal-VAE for efficient training and inference ☆47 · Updated Aug 20, 2025
- ☆98 · Updated May 31, 2025
- ☆87 · Updated Jan 23, 2025
- Triton-based sparse quantization attention kernel collection ☆43 · Updated Aug 29, 2025
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ☆151 · Updated May 10, 2025
- TiledLower is a dataflow analysis and codegen framework written in Rust. ☆13 · Updated Nov 23, 2024
- Compare different hardware platforms via the roofline model for LLM inference tasks (see the roofline sketch after this list) ☆119 · Updated Mar 13, 2024
- Multi-Level Triton Runner supporting Python, IR, PTX, AMDGCN, cubin and hasco. ☆95 · Updated this week
- High-performance RMSNorm implemented with on-SM storage (registers and shared memory); see the reference sketch after this list ☆30 · Updated Jan 22, 2026
- 📊 Research-focused SDXL training framework exploring novel optimization approaches. Goals include enhanced image quality, training stabi… ☆20 · Updated Jun 7, 2025
- ☆81 · Updated Dec 27, 2024
- ☆120 · Updated May 16, 2025
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆20 · Updated Jul 19, 2024
- TiledKernel is a code generation library based on macro kernels and a memory-hierarchy graph data structure. ☆19 · Updated May 12, 2024
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ☆426 · Updated Jul 5, 2025
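
A note on the "fine-grained scaling" mentioned in the DeepGEMM entry: instead of a single FP8 scale for the whole tensor, each small block (commonly 1×128) gets its own scale, which bounds quantization error locally. The sketch below is a minimal, hypothetical PyTorch illustration of per-block e4m3 quantization, not DeepGEMM's actual code; the function name and the block size of 128 are assumptions.

```python
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3

def quantize_per_block(x: torch.Tensor, block: int = 128):
    """Quantize an (M, K) tensor to FP8 e4m3 with one scale per
    1 x `block` slice of each row (K must be divisible by `block`)."""
    m, k = x.shape
    xb = x.reshape(m, k // block, block)
    # One scale per block, chosen so the block's max |value| maps to FP8_MAX.
    scale = xb.abs().amax(dim=-1, keepdim=True).clamp_min(1e-12) / FP8_MAX
    q = (xb / scale).to(torch.float8_e4m3fn)   # the cast does the fp8 rounding
    return q.reshape(m, k), scale.squeeze(-1)  # dequantize: q.float() * scale

x = torch.randn(4, 256)
q, s = quantize_per_block(x)
xr = (q.float().reshape(4, 2, 128) * s.unsqueeze(-1)).reshape(4, 256)
print((xr - x).abs().max())  # small per-block quantization error
```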
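For the roofline-model entry: a kernel's lower-bound runtime is the larger of its compute time (FLOPs / peak throughput) and its memory time (bytes moved / bandwidth), and whichever dominates tells you whether the kernel is compute- or memory-bound. A minimal sketch, using illustrative H100-class figures (≈989 TFLOPS dense FP16 tensor-core throughput, ≈3.35 TB/s HBM3 bandwidth) as assumptions rather than measurements:

```python
def roofline_seconds(flops: float, bytes_moved: float,
                     peak_flops: float, bandwidth: float) -> float:
    """Roofline lower bound: the slower of compute time and memory time."""
    t_compute = flops / peak_flops
    t_memory = bytes_moved / bandwidth
    return max(t_compute, t_memory)

PEAK = 989e12  # ~989 TFLOPS dense fp16 tensor-core (illustrative H100-class)
BW = 3.35e12   # ~3.35 TB/s HBM3 (illustrative H100-class)

# Decoding one token of a 7B-parameter fp16 model streams ~14 GB of weights
# and does ~14 GFLOPs (2 FLOPs per parameter): heavily memory-bound.
t = roofline_seconds(flops=14e9, bytes_moved=14e9, peak_flops=PEAK, bandwidth=BW)
print(f"{t * 1e3:.2f} ms per token (memory-bound)")  # ~4.18 ms
```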
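And for the RMSNorm entry: RMSNorm rescales each row by the reciprocal of its root-mean-square, y = x / sqrt(mean(x²) + ε) · w; keeping the row in registers and shared memory lets a kernel avoid a second pass over global memory. The sketch below is only a PyTorch reference for the math, not that repository's kernel.

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor,
             eps: float = 1e-6) -> torch.Tensor:
    """Reference RMSNorm over the last dim: x / rms(x) * weight,
    with the reduction done in fp32 for numerical stability."""
    inv_rms = x.float().pow(2).mean(dim=-1, keepdim=True).add(eps).rsqrt()
    return (x.float() * inv_rms).to(x.dtype) * weight

x = torch.randn(2, 4096, dtype=torch.bfloat16)
w = torch.ones(4096, dtype=torch.bfloat16)
print(rms_norm(x, w).shape)  # torch.Size([2, 4096])
```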