HydraQYH / hp_rms_norm
High-performance RMSNorm implemented using SM-core storage (registers and shared memory)
☆26 · Jan 22, 2026 · Updated 3 weeks ago
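For reference, RMSNorm scales each row by the reciprocal of its root-mean-square, with no mean subtraction or bias. Below is a minimal NumPy sketch of that computation — an illustration of the math only, not the repository's CUDA kernel, which keeps the row resident in registers and shared memory:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Root-mean-square of each row; eps guards against division by zero.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    # Normalize by the RMS, then apply the learned per-channel gain.
    return (x / rms) * weight

x = np.array([[1.0, 2.0, 2.0]])   # mean of squares = 3.0, rms ≈ 1.732
w = np.ones(3)                    # identity gain for illustration
print(rms_norm(x, w))
```

A fast GPU implementation of this reduces the per-row mean of squares across threads (e.g. via warp shuffles and shared memory) before the elementwise scale, which is the part the repository optimizes.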
Alternatives and similar repositories for hp_rms_norm
Users interested in hp_rms_norm are comparing it to the libraries listed below.
- Expert Specialization MoE Solution based on CUTLASS ☆27 · Jan 19, 2026 · Updated 3 weeks ago
- ☆38 · Aug 7, 2025 · Updated 6 months ago
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Oct 1, 2025 · Updated 4 months ago
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 6 months ago
- Quartet II Official Code ☆43 · Feb 2, 2026 · Updated 2 weeks ago
- deepstream + cuda, yolo26, yolo-master, yolo11, yolov8, sam, transformer, etc. ☆35 · Feb 7, 2026 · Updated last week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Feb 9, 2026 · Updated last week
- GitHub repo for the ICLR 2025 paper, Fine-tuning Large Language Models with Sparse Matrices ☆24 · Feb 2, 2026 · Updated 2 weeks ago
- ☆51 · Apr 30, 2025 · Updated 9 months ago
- ☆155 · Mar 4, 2025 · Updated 11 months ago
- Cookbook of SGLang - Recipes ☆73 · Updated this week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.⚡️ ☆148 · May 10, 2025 · Updated 9 months ago
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 6 months ago
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 8 months ago
- ☆20 · Dec 24, 2024 · Updated last year
- Xmixers: a collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 5 months ago
- NVSHMEM-Tutorial: build a DeepEP-like GPU buffer ☆163 · Updated this week
- SGEMM optimization with CUDA, step by step ☆21 · Mar 23, 2024 · Updated last year
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. ☆37 · Feb 6, 2026 · Updated last week
- Awesome code, projects, books, etc. related to CUDA ☆30 · Feb 3, 2026 · Updated last week
- Codebase for CUDA learning ☆31 · Jul 13, 2024 · Updated last year
- ☆88 · May 31, 2025 · Updated 8 months ago
- From Minimal GEMM to Everything ☆104 · Dec 31, 2025 · Updated last month
- Nex Venus Communication Library ☆72 · Nov 17, 2025 · Updated 2 months ago
- Writing a CUDA software ray-tracing renderer with Analysis-Driven Optimization from scratch: a Python-importable, distributed parallel re… ☆37 · Oct 5, 2025 · Updated 4 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆120 · Mar 13, 2024 · Updated last year
- DeeperGEMM: a heavily optimized version ☆74 · May 5, 2025 · Updated 9 months ago
- Asynchronous pipeline-parallel optimization ☆19 · Feb 2, 2026 · Updated 2 weeks ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Feb 24, 2025 · Updated 11 months ago
- ☆86 · Updated this week
- FP8 flash attention implemented with the CUTLASS library on the Ada architecture ☆78 · Aug 12, 2024 · Updated last year
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments. ☆30 · Mar 28, 2025 · Updated 10 months ago
- This project builds and deploys an SNPE model on Qualcomm devices that have unsupported layers which are not part of… ☆10 · Oct 4, 2021 · Updated 4 years ago
- A size profiler for CUDA binaries ☆72 · Jan 15, 2026 · Updated last month
- DeepSeek-V3/R1 inference performance simulator ☆177 · Mar 27, 2025 · Updated 10 months ago
- ☆261 · Jul 11, 2024 · Updated last year
- flex-block-attn: an efficient block-sparse attention computation library ☆108 · Dec 26, 2025 · Updated last month
- ☆20 · Sep 11, 2025 · Updated 5 months ago
- ☆20 · May 24, 2025 · Updated 8 months ago