Bruce-Lee-LY / cutlass_gemm
Multiple GEMM operators are constructed with CUTLASS to support LLM inference.
☆20 · Updated Aug 3, 2025 (6 months ago)
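The repository's own operators are not reproduced here; as a rough illustration of what a GEMM operator built on CUTLASS looks like, the sketch below uses the CUTLASS 2.x device-level API. The element types, layouts, architecture target, and the wrapper `run_hgemm` are assumptions for the example, not details taken from this repository.

```cpp
#include "cutlass/cutlass.h"
#include "cutlass/numeric_types.h"
#include "cutlass/gemm/device/gemm.h"

// FP16 GEMM D = alpha * A * B + beta * C with FP32 accumulation on Tensor Cores
// (SM80 target). A is row-major M x K, B is column-major K x N, C/D row-major M x N.
using HGemm = cutlass::gemm::device::Gemm<
    cutlass::half_t, cutlass::layout::RowMajor,     // A
    cutlass::half_t, cutlass::layout::ColumnMajor,  // B
    cutlass::half_t, cutlass::layout::RowMajor,     // C / D
    float,                                          // accumulator type
    cutlass::arch::OpClassTensorOp,                 // run on Tensor Cores
    cutlass::arch::Sm80>;                           // target architecture

// All pointers are device pointers; lda/ldb/ldc are leading dimensions.
cutlass::Status run_hgemm(int M, int N, int K,
                          cutlass::half_t const *A, int lda,
                          cutlass::half_t const *B, int ldb,
                          cutlass::half_t *C, int ldc,
                          float alpha, float beta) {
  HGemm gemm_op;
  HGemm::Arguments args({M, N, K},       // problem size
                        {A, lda},        // tensor A
                        {B, ldb},        // tensor B
                        {C, ldc},        // source C
                        {C, ldc},        // destination D (written in place)
                        {alpha, beta});  // linear-combination epilogue
  return gemm_op(args);                  // initializes and launches the kernel
}
```

Error handling (checking the returned `cutlass::Status`), stream synchronization, and CUTLASS's leading-dimension alignment requirements are omitted for brevity.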
Alternatives and similar repositories for cutlass_gemm
Users interested in cutlass_gemm are comparing it to the libraries listed below.
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated Feb 9, 2026 (last week)
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024 (last year)
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated Jun 11, 2025 (8 months ago)
- ☆21 · Updated Aug 14, 2024 (last year)
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆114 · Updated Sep 10, 2024 (last year)
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆44 · Updated Feb 27, 2025 (11 months ago)
- ☆65 · Updated Apr 26, 2025 (9 months ago)
- Awesome code, projects, books, etc. related to CUDA ☆30 · Updated Feb 3, 2026 (2 weeks ago)
- Step-by-step SGEMM optimization with CUDA ☆21 · Updated Mar 23, 2024 (last year)
- A practical way of learning Swizzle ☆36 · Updated Feb 3, 2025 (last year)
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated Aug 9, 2025 (6 months ago)
- ☆12 · Updated Mar 13, 2023 (2 years ago)
- Use Tensor Cores to calculate back-to-back HGEMM (half-precision general matrix multiplication) with MMA PTX instructions. ☆13 · Updated Nov 3, 2023 (2 years ago)
- A simple API for using CUPTI ☆11 · Updated Aug 19, 2025 (5 months ago)
- A simple neural network built with the C++17 standard and the Eigen library, supporting both forward and backward propagation. ☆10 · Updated Jul 27, 2024 (last year)
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Updated Jan 28, 2025 (last year)
- An experimental communicating attention kernel based on DeepEP. ☆35 · Updated Jul 29, 2025 (6 months ago)
- Estimate MFU for DeepSeekV3 ☆26 · Updated Jan 5, 2025 (last year)
- Kernel Library Wheel for SGLang ☆17 · Updated this week
- ☕️ A VS Code extension for Netron; supports *.pdmodel, *.nb, *.onnx, *.pb, *.h5, *.tflite, *.pth, *.pt, *.mnn, *.param, etc. ☆14 · Updated Jun 4, 2023 (2 years ago)
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance (see the WMMA sketch after this list). ☆148 · Updated May 10, 2025 (9 months ago)
- High-performance RMSNorm implemented using on-SM storage (registers and shared memory) ☆26 · Updated Jan 22, 2026 (3 weeks ago)
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆11 · Updated Jun 10, 2024 (last year)
- A simple and efficient memory pool implemented with C++11. ☆10 · Updated Jun 2, 2022 (3 years ago)
- Matrix multiply-accumulate with CUDA and WMMA (Tensor Cores) ☆145 · Updated Aug 18, 2020 (5 years ago)
- LLaMA INT4 CUDA inference with AWQ ☆54 · Updated Jan 20, 2025 (last year)
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆35 · Updated Sep 15, 2023 (2 years ago)
- Ongoing research training transformer models at scale ☆18 · Updated Feb 5, 2026 (last week)
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs ☆23 · Updated Sep 23, 2025 (4 months ago)
- Stable Diffusion in TensorRT 8.5+ ☆15 · Updated Mar 19, 2023 (2 years ago)
- ☆62 · Updated this week
- TensorRT-in-Action is a GitHub repository providing code examples for using TensorRT, with accompanying Jupyter Notebooks. ☆15 · Updated Jun 1, 2023 (2 years ago)
- DeepStream + CUDA: yolo26, yolo-master, yolo11, yolov8, SAM, transformer, etc. ☆35 · Updated Feb 7, 2026 (last week)
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer ☆96 · Updated Sep 13, 2025 (5 months ago)
- ☆261 · Updated Jul 11, 2024 (last year)
- ☆165 · Updated Feb 5, 2026 (last week)
- Tutorials on extending and importing TVM with a CMake include dependency. ☆16 · Updated Oct 11, 2024 (last year)
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated Jun 3, 2024 (last year)
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Updated Jan 11, 2025 (last year)
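Several entries above (notably the HGEMM-from-scratch and WMMA matrix multiply-accumulate repositories) revolve around the warp-level WMMA API. The kernel below does not reproduce any of those repositories; it is only a minimal sketch of the API, assuming half inputs with float accumulation, row-major A and D, column-major B, and M, N, K that are multiples of 16. The kernel name and launch shape are invented for the example.

```cpp
#include <mma.h>
#include <cuda_fp16.h>

using namespace nvcuda;

// One warp computes one 16x16 tile of D = A * B.
// A: row-major M x K, B: column-major K x N, D: row-major M x N (device pointers).
// Requires compute capability 7.0+ and M, N, K to be multiples of 16.
__global__ void wmma_gemm_16x16(const half *A, const half *B, float *D,
                                int M, int N, int K) {
  int tile_m = blockIdx.y;  // which 16-row band of D this warp owns
  int tile_n = blockIdx.x;  // which 16-column band of D this warp owns

  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
  wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;
  wmma::fill_fragment(acc_frag, 0.0f);

  // March along K in steps of 16, accumulating into the fragment.
  for (int k = 0; k < K; k += 16) {
    const half *a_tile = A + tile_m * 16 * K + k;  // row-major A, leading dim K
    const half *b_tile = B + tile_n * 16 * K + k;  // col-major B, leading dim K
    wmma::load_matrix_sync(a_frag, a_tile, K);
    wmma::load_matrix_sync(b_frag, b_tile, K);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);
  }

  // Write the finished 16x16 tile to D (row-major, leading dim N).
  float *d_tile = D + tile_m * 16 * N + tile_n * 16;
  wmma::store_matrix_sync(d_tile, acc_frag, N, wmma::mem_row_major);
}
```

A possible launch, one 32-thread warp per output tile: `wmma_gemm_16x16<<<dim3(N / 16, M / 16), 32>>>(A, B, D, M, N, K);`, compiled with `-arch=sm_70` or newer.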