[WIP] Better (FP8) attention for Hopper
☆32 · Updated Feb 24, 2025
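Since the repository's focus is FP8 attention on Hopper, a minimal PyTorch sketch of the underlying idea may help orient readers: quantize Q and K to float8_e4m3 with per-tensor scales before the QK^T matmul. Everything below is illustrative (the function names, the per-tensor scaling choice, the dequantize-then-matmul emulation) and is not QuantumAttention's actual API; a real Hopper kernel would keep the operands in FP8 through the Tensor Core MMA instead of dequantizing.

```python
import torch

def quant_dequant_e4m3(x: torch.Tensor) -> torch.Tensor:
    # Per-tensor scale mapping the max magnitude onto e4m3's max value (448),
    # then a round-trip through float8 to emulate FP8 precision loss.
    scale = x.abs().amax().clamp(min=1e-12) / 448.0
    return (x / scale).to(torch.float8_e4m3fn).to(x.dtype) * scale

def fp8_emulated_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor):
    # Quantize only Q and K (the matmul operands most tolerant of FP8);
    # softmax and the PV matmul stay in the original dtype.
    q8, k8 = quant_dequant_e4m3(q), quant_dequant_e4m3(k)
    scores = q8 @ k8.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v
```

Requires a PyTorch build with float8 dtypes (2.1+); the round-trip emulation makes the numerics inspectable without Hopper hardware.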
Alternatives and similar repositories for QuantumAttention
Users interested in QuantumAttention are comparing it to the libraries listed below.
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated Nov 18, 2024
- An experimental communicating attention kernel based on DeepEP. ☆35 · Updated Jul 29, 2025
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated Mar 25, 2026
- SGLang Kernel Wheel Index ☆18 · Updated this week
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆23 · Updated Nov 15, 2024
- ☆191 · Updated Jan 14, 2025
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- ☆52 · Updated May 19, 2025
- ☆33 · Updated Feb 3, 2025
- Triton kernels for Flux ☆22 · Updated Jul 7, 2025
- ☆20 · Updated Sep 28, 2024
- ☆64 · Updated this week
- ☆26 · Updated Feb 17, 2025
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regression ☆23 · Updated Oct 1, 2025
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Updated Sep 4, 2025
- Parsers for CUDA binary files ☆24 · Updated Dec 29, 2023
- A parallel VAE that avoids OOM for high-resolution image generation ☆89 · Updated Mar 12, 2026
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- ☆19 · Updated Dec 24, 2024
- ☆65 · Updated Apr 26, 2025
- KsanaDiT: High-Performance DiT (Diffusion Transformer) Inference Framework for Video & Image Generation ☆48 · Updated Mar 30, 2026
- GPTQ inference TVM kernel ☆40 · Updated Apr 25, 2024
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆70 · Updated Apr 14, 2025
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆651 · Updated Mar 6, 2026
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Updated Jan 11, 2025
- Distributed parallel 3D-Causal-VAE for efficient training and inference ☆47 · Updated Aug 20, 2025
- ☆96 · Updated May 31, 2025
- ☆87 · Updated Jan 23, 2025
- Triton-based sparse quantization attention kernel collection ☆43 · Updated Aug 29, 2025
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆150 · Updated May 10, 2025
- TiledLower is a dataflow analysis and codegen framework written in Rust. ☆13 · Updated Nov 23, 2024
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆84 · Updated Mar 29, 2026
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a worked sketch appears after this list). ☆118 · Updated Mar 13, 2024
- High-performance RMSNorm implementation using SM core storage (registers and shared memory); a reference sketch follows this list. ☆30 · Updated Jan 22, 2026
- 📊 Research-focused SDXL training framework exploring novel optimization approaches. Goals include enhanced image quality, training stability… ☆20 · Updated Jun 7, 2025
- ☆80 · Updated Dec 27, 2024
- ☆119 · Updated May 16, 2025
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆20 · Updated Jul 19, 2024
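For the roofline entry above, a short sketch of the model it applies: attainable throughput is the lesser of the compute roof and memory bandwidth times arithmetic intensity. The function and the hardware numbers below are illustrative assumptions, not that repository's code or measured figures.

```python
def attainable_tflops(peak_tflops: float, mem_bw_tb_s: float,
                      flops_per_byte: float) -> float:
    # Roofline: min(compute roof, memory roof), with arithmetic
    # intensity expressed in FLOPs per byte moved from memory.
    return min(peak_tflops, mem_bw_tb_s * flops_per_byte)

# An FP16 GEMV does ~2 FLOPs per 2-byte weight (~1 FLOP/byte), so a chip
# with a ~1000 TFLOP/s roof and ~3 TB/s of HBM is memory-bound at decode:
print(attainable_tflops(1000.0, 3.0, 1.0))  # -> 3.0 TFLOP/s
```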
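Likewise, for the RMSNorm kernel entry, a plain PyTorch reference for what such a kernel computes. The repository's implementation lives in CUDA registers and shared memory; this sketch only pins down the math it accelerates.

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6):
    # y = x / sqrt(mean(x^2) + eps) * g: normalize by the root mean square
    # over the last dim; unlike LayerNorm there is no mean subtraction.
    inv_rms = x.pow(2).mean(dim=-1, keepdim=True).add(eps).rsqrt()
    return x * inv_rms * weight
```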