High-performance RMSNorm implementation using SM core storage (registers and shared memory)
☆30 · Jan 22, 2026 · Updated 2 months ago
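For context, RMSNorm scales each row of activations by the reciprocal of its root mean square. The following is a minimal Python sketch of that arithmetic only, not the repository's CUDA kernel (which, per the title, keeps per-thread partial sums in registers and combines them through shared memory); `eps` and the function name are illustrative choices, not taken from the repo.

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: y_i = x_i / rms(x) * w_i, with rms(x) = sqrt(mean(x^2) + eps)
    mean_sq = sum(v * v for v in x) / len(x)
    inv_rms = 1.0 / math.sqrt(mean_sq + eps)
    return [v * inv_rms * w for v, w in zip(x, weight)]
```

A register/shared-memory CUDA version of this would have each thread accumulate its slice of the sum of squares in registers, reduce across the block via shared memory (or warp shuffles), then apply `inv_rms * w` in a second pass; the sketch above only pins down the math being optimized.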
Alternatives and similar repositories for hp_rms_norm
Users interested in hp_rms_norm are comparing it to the libraries listed below.
- Expert Specialization MoE Solution based on CUTLASS☆27 · Jan 19, 2026 · Updated 2 months ago
- ☆38 · Aug 7, 2025 · Updated 7 months ago
- Cute layout visualization☆33 · Jan 18, 2026 · Updated 2 months ago
- An experimental communicating attention kernel based on DeepEP.☆35 · Jul 29, 2025 · Updated 8 months ago
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi…☆23 · Oct 1, 2025 · Updated 5 months ago
- Quartet II Official Code☆63 · Mar 23, 2026 · Updated last week
- ☆155 · Mar 4, 2025 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling☆22 · Mar 18, 2026 · Updated last week
- GitHub repo for the ICLR 2025 paper, Fine-tuning Large Language Models with Sparse Matrices☆25 · Feb 2, 2026 · Updated last month
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.☆150 · May 10, 2025 · Updated 10 months ago
- ☆13 · Jan 7, 2025 · Updated last year
- Persistent dense GEMM for Hopper in `CuTeDSL`☆15 · Aug 9, 2025 · Updated 7 months ago
- Nex Venus Communication Library☆74 · Nov 17, 2025 · Updated 4 months ago
- ☆52 · Apr 30, 2025 · Updated 11 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer☆172 · Feb 11, 2026 · Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks.☆119 · Mar 13, 2024 · Updated 2 years ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference.☆46 · Jun 11, 2025 · Updated 9 months ago
- Codebase for CUDA learning☆31 · Jul 13, 2024 · Updated last year
- ☆94 · May 31, 2025 · Updated 9 months ago
- GEMV implementation with CUTLASS☆19 · Aug 21, 2025 · Updated 7 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers☆28 · Sep 4, 2025 · Updated 6 months ago
- Multiple GEMM operators are constructed with CUTLASS to support LLM inference.☆20 · Aug 3, 2025 · Updated 7 months ago
- ☆15 · Feb 23, 2025 · Updated last year
- Cookbook of SGLang - Recipe☆106 · Updated this week
- DeepSeek-V3/R1 inference performance simulator☆191 · Mar 27, 2025 · Updated last year
- Awesome code, projects, books, etc. related to CUDA☆32 · Feb 3, 2026 · Updated last month
- cutile kernel examples☆40 · Feb 6, 2026 · Updated last month
- ☆32 · Jul 2, 2025 · Updated 8 months ago
- FlashTile is a CUDA Tile IR compiler that is compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs.☆58 · Feb 6, 2026 · Updated last month
- Asynchronous pipeline parallel optimization☆19 · Feb 2, 2026 · Updated last month
- ☆20 · Dec 24, 2024 · Updated last year
- A size profiler for CUDA binaries☆71 · Jan 15, 2026 · Updated 2 months ago
- DeeperGEMM: crazy optimized version☆75 · May 5, 2025 · Updated 10 months ago
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments.☆31 · Mar 28, 2025 · Updated last year
- COCCL: Compression and precision co-aware collective communication library☆30 · Mar 16, 2025 · Updated last year
- A lightweight design for computation-communication overlap.☆225 · Jan 20, 2026 · Updated 2 months ago
- ☆150 · Mar 18, 2024 · Updated 2 years ago
- Step-by-step SGEMM optimization with CUDA☆22 · Mar 23, 2024 · Updated 2 years ago
- High-performance LLM operator library built on TileLang.☆96 · Updated this week
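Several of the performance tools listed above (notably the roofline-model comparison for LLM inference) rest on the same simple bound: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity. A hedged sketch of that bound, with made-up hardware numbers purely for illustration:

```python
def roofline(peak_flops, mem_bw_bytes_per_s, arithmetic_intensity):
    # Attainable FLOP/s = min(peak compute, bandwidth * FLOPs-per-byte)
    return min(peak_flops, mem_bw_bytes_per_s * arithmetic_intensity)

# Hypothetical accelerator: 100 TFLOP/s peak, 2 TB/s memory bandwidth.
# LLM decoding (GEMV-like, roughly 1 FLOP/byte) sits on the bandwidth roof,
# while a large GEMM with high reuse reaches the compute roof.
decode_bound = roofline(100e12, 2e12, 1.0)
gemm_bound = roofline(100e12, 2e12, 1000.0)
```

This is why decoding-stage kernels in the list above emphasize memory-side optimizations rather than raw FLOP throughput.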