latentCall145 / channels-last-groupnorm
A CUDA kernel for NHWC GroupNorm for PyTorch
☆18 Updated 5 months ago
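For context, a minimal sketch of the operation this repo targets: running stock PyTorch GroupNorm on a tensor stored in channels-last (NHWC) memory format. This uses only standard PyTorch APIs and does not show this repository's own kernel or bindings; the shapes and group count are illustrative assumptions.

```python
import torch

# Illustrative only: standard PyTorch channels-last GroupNorm, the operation
# an NHWC GroupNorm CUDA kernel accelerates. Assumes a CUDA device is available.
x = torch.randn(8, 64, 32, 32, device="cuda")      # logical NCHW tensor
x = x.to(memory_format=torch.channels_last)        # NHWC strides in memory
gn = torch.nn.GroupNorm(num_groups=8, num_channels=64).cuda()
y = gn(x)
# Depending on the PyTorch version/backend, the output may keep the
# channels-last layout of the input.
print(y.shape, y.is_contiguous(memory_format=torch.channels_last))
```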
Alternatives and similar repositories for channels-last-groupnorm:
Users interested in channels-last-groupnorm are comparing it to the libraries listed below.
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆36 Updated 3 weeks ago
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆17 Updated 6 months ago
- Quantized Attention on GPU ☆45 Updated 5 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 Updated 7 months ago
- [WIP] Better (FP8) attention for Hopper ☆30 Updated 2 months ago
- ☆28 Updated 2 months ago
- ☆43 Updated last week
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆18 Updated 5 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak ⚡️ performance. ☆73 Updated 3 weeks ago
- ☆11 Updated last month
- ☆55 Updated 2 weeks ago
- Awesome code, projects, books, etc. related to CUDA ☆16 Updated last week
- FP8 flash attention implemented with the cutlass library on the Ada architecture ☆63 Updated 8 months ago
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆29 Updated 4 months ago
- GPTQ inference TVM kernel ☆38 Updated last year
- OneFlow Serving ☆20 Updated 2 weeks ago
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× … ☆39 Updated this week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆91 Updated 3 weeks ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆15 Updated this week
- Tutorials on extending and importing TVM as a CMake include dependency. ☆13 Updated 6 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆44 Updated 9 months ago
- ☆60 Updated this week
- ☆30 Updated last year
- ☆92 Updated 7 months ago
- ☆19 Updated 6 months ago
- ☆68 Updated 3 months ago
- Implement Flash Attention using Cute. ☆76 Updated 4 months ago
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆18 Updated last week
- ☆67 Updated this week
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 Updated last month