Persistent dense gemm for Hopper in `CuTeDSL`
☆15 · Aug 9, 2025 · Updated 7 months ago
Alternatives and similar repositories for persistent_dense_gemm
Users interested in persistent_dense_gemm are comparing it to the libraries listed below.
- High Performance FP8 GEMM Kernels for SM89 and later GPUs. ☆20 · Jan 24, 2025 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- ☆39 · Dec 14, 2025 · Updated 3 months ago
- GEMV implementation with CUTLASS. ☆19 · Aug 21, 2025 · Updated 7 months ago
- ☆53 · Feb 24, 2026 · Updated 3 weeks ago
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 7 months ago
- Companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆118 · Sep 24, 2025 · Updated 5 months ago
- ☆32 · Jul 2, 2025 · Updated 8 months ago
- ☆38 · Aug 7, 2025 · Updated 7 months ago
- A practical way of learning Swizzle. ☆37 · Feb 3, 2025 · Updated last year
- ☆23 · Jul 11, 2025 · Updated 8 months ago
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆94 · Feb 2, 2026 · Updated last month
- ☆52 · May 19, 2025 · Updated 10 months ago
- ☆16 · Feb 24, 2026 · Updated 3 weeks ago
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆19 · Aug 3, 2025 · Updated 7 months ago
- MSLK (Meta Superintelligence Labs Kernels) is a collection of PyTorch GPU operator libraries that are designed and optimized for GenAI tr… ☆71 · Updated this week
- A toolkit for developers to simplify the transformation of nn.Module instances; it now corresponds to PyTorch FX (torch.fx). ☆13 · Apr 7, 2023 · Updated 2 years ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling. ☆21 · Updated this week
- Accelerated Computer Vision Lab (ACCV-Lab) is a systematic collection of packages with the common goal to facilitate end-to-end efficient… ☆46 · Feb 15, 2026 · Updated last month
- Nex Venus Communication Library. ☆74 · Nov 17, 2025 · Updated 4 months ago
- Pure Triton kernels for Qwen3.5-27B inference on NVIDIA B200. ☆81 · Feb 28, 2026 · Updated 2 weeks ago
- Ship correct and fast LLM kernels to PyTorch. ☆145 · Jan 14, 2026 · Updated 2 months ago
- High-performance RMSNorm implementation using SM core storage (registers and shared memory). ☆29 · Jan 22, 2026 · Updated last month
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- Effective transpose on Hopper GPUs. ☆28 · Sep 6, 2025 · Updated 6 months ago
- ☆36 · Mar 7, 2025 · Updated last year
- Implementation of Flash Attention using CuTe. ☆102 · Dec 17, 2024 · Updated last year
- cuTile kernel examples. ☆39 · Feb 6, 2026 · Updated last month
- NVIDIA SASS disassembler/inline patcher. ☆44 · Updated this week
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming. ☆182 · Updated this week
- Sample codes using NVSHMEM on multi-GPU systems. ☆30 · Jan 22, 2023 · Updated 3 years ago
- IntLLaMA: a fast and light quantization solution for LLaMA. ☆18 · Jul 21, 2023 · Updated 2 years ago
- ☆125 · Updated this week
- A Triton JIT runtime and FFI provider in C++. ☆32 · Updated this week
- ☆64 · Updated this week
- Awesome code, projects, books, etc. related to CUDA. ☆31 · Feb 3, 2026 · Updated last month
- Study of CUTLASS. ☆22 · Nov 10, 2024 · Updated last year
- Official repo for the AAAI 2026 accepted paper "Rethinking the Spatio-Temporal Alignment of End-to-End 3D Perception". ☆29 · Jan 13, 2026 · Updated 2 months ago
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. ☆56 · Feb 6, 2026 · Updated last month