latentCall145 / channels-last-groupnorm
A CUDA kernel for NHWC GroupNorm for PyTorch
☆22 · Nov 15, 2024 · Updated last year
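For context, here is a minimal sketch of the workload this kernel accelerates, written against stock PyTorch APIs only; the repo's own Python binding is not shown here, and the layout conversion below is standard PyTorch rather than the repo's code.

```python
# Minimal sketch of the NHWC (channels-last) GroupNorm workload, using only
# stock PyTorch APIs; the repo is assumed to replace the gn(x) path below
# with a fused CUDA kernel that reads the channels-last layout directly.
import torch

x = torch.randn(8, 64, 32, 32, device="cuda")   # logical NCHW shape
x = x.to(memory_format=torch.channels_last)     # physical NHWC layout
gn = torch.nn.GroupNorm(num_groups=8, num_channels=64).cuda()
y = gn(x)                                       # normalize 8 groups of 8 channels

print(y.shape, y.is_contiguous(memory_format=torch.channels_last))
```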
Alternatives and similar repositories for channels-last-groupnorm
Users interested in channels-last-groupnorm are comparing it to the libraries listed below.
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆18 · Nov 18, 2024 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Sep 13, 2025 · Updated 5 months ago
- trt-hackathon-2022 third-prize solution ☆10 · Mar 6, 2023 · Updated 2 years ago
- ☆26 · Aug 15, 2023 · Updated 2 years ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.⚡️ ☆148 · May 10, 2025 · Updated 9 months ago
- CuTe layout visualization ☆30 · Jan 18, 2026 · Updated 3 weeks ago
- Kernel Library Wheel for SGLang ☆17 · Updated this week
- ☆14 · Nov 3, 2025 · Updated 3 months ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Feb 24, 2025 · Updated 11 months ago
- A collection of promising code snippets ☆13 · Jun 6, 2023 · Updated 2 years ago
- Tutorials on extending and importing TVM via CMake include dependencies. ☆16 · Oct 11, 2024 · Updated last year
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆72 · Sep 8, 2024 · Updated last year
- Deploys the Nanodet detection algorithm on the OpenVINO inference framework with rewritten pre- and post-processing for very high performance, making detection on Intel CPU platforms blazing fast! The model is also post-training quantized (PTQ) to int8 with NNCF and PPQ for even faster inference. ☆16 · Jun 14, 2023 · Updated 2 years ago
- ☆19 · Aug 23, 2022 · Updated 3 years ago
- ☆32 · Jul 2, 2025 · Updated 7 months ago
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators ☆19 · Feb 9, 2026 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ☆143 · May 29, 2025 · Updated 8 months ago
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library ☆79 · Aug 12, 2024 · Updated last year
- ☆20 · Oct 11, 2023 · Updated 2 years ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- ☆20 · Sep 28, 2024 · Updated last year
- Tianchi NVIDIA TensorRT Hackathon 2023, Generative AI Model Optimization Track: third-place solution in the preliminary round ☆50 · Aug 16, 2023 · Updated 2 years ago
- Step-by-step implementation of a fast softmax kernel in CUDA ☆60 · Jan 6, 2025 · Updated last year
- Apollo r3.0 perception port (based on the Baidu developer kit) ☆19 · Oct 29, 2019 · Updated 6 years ago
- ☆165 · Feb 5, 2026 · Updated last week
- ☆24 · Oct 10, 2022 · Updated 3 years ago
- ☆52 · May 19, 2025 · Updated 8 months ago
- CUTLASS and CuTe Examples ☆128 · Nov 30, 2025 · Updated 2 months ago
- ☆104 · Nov 7, 2024 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆105 · Jul 27, 2018 · Updated 7 years ago
- ☆26 · Feb 17, 2025 · Updated last year
- ☆90 · Jun 30, 2023 · Updated 2 years ago
- ☆88 · May 31, 2025 · Updated 8 months ago
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆80 · Dec 18, 2025 · Updated last month
- Parsers for CUDA binary files ☆25 · Dec 29, 2023 · Updated 2 years ago
- BEVFusion implementation in ROS2 ☆32 · Apr 15, 2025 · Updated 10 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Sep 10, 2024 · Updated last year
- ☆190 · Jan 14, 2025 · Updated last year
- Notes on understanding the tensorRT_Pro open-source project ☆22 · Feb 23, 2023 · Updated 2 years ago