leimao / CUDA-GEMM-Optimization
CUDA Matrix Multiplication Optimization
Related projects:
- An easy-to-understand TensorOp Matmul tutorial
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core)
- Step-by-step optimization of CUDA SGEMM
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using Tensor Cores with the WMMA API and MMA PTX instructions
- TiledCUDA, a highly efficient kernel template library designed to elevate CUDA C's level of abstraction for processing tiles
- A collection of benchmarks to measure basic GPU capabilities
- A simple high-performance CUDA GEMM implementation
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores
- An extension library for the WMMA API (Tensor Core API)
- Shared middle layer for Triton compilation
- Play GEMM with TVM
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer
- Standalone Flash Attention v2 kernel without a libtorch dependency
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline
- Training material for Nsight developer tools
- Instructions, Docker images, and examples for Nsight Compute and Nsight Systems
- Benchmark code for the "Online normalizer calculation for softmax" paper
- PyTorch emulation library for Microscaling (MX)-compatible data formats
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5)
- SparseTIR, a sparse tensor compiler for deep learning
- Magicube, a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores