jundaf2 / eigenMHA
Forward and backward multi-head attention (MHA) DNN operators implemented with LibTorch, cuDNN, and Eigen.
☆27 · Updated last year
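The forward and backward operators this repo implements compute scaled dot-product attention and its gradients. The underlying math can be sketched in NumPy; this is a hedged illustration of the standard attention formulas, not the repo's LibTorch/cuDNN/Eigen code (function names `mha_forward`/`mha_backward` are my own):

```python
import numpy as np

def mha_forward(q, k, v):
    """Scaled dot-product attention forward: O = softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    s = q @ k.T / np.sqrt(d)
    s -= s.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    p = np.exp(s)
    p /= p.sum(axis=-1, keepdims=True)   # softmax probabilities, rows sum to 1
    return p @ v, p                      # output and probabilities (saved for backward)

def mha_backward(q, k, v, p, d_out):
    """Gradients of the attention output w.r.t. Q, K, V, given upstream grad d_out."""
    d = q.shape[-1]
    dv = p.T @ d_out                     # dL/dV = P^T dO
    dp = d_out @ v.T                     # dL/dP = dO V^T
    # softmax Jacobian applied row-wise: dS = P * (dP - rowsum(dP * P))
    ds = p * (dp - (dp * p).sum(axis=-1, keepdims=True))
    ds /= np.sqrt(d)                     # account for the 1/sqrt(d) scaling of S
    dq = ds @ k                          # dL/dQ = dS K
    dk = ds.T @ q                        # dL/dK = dS^T Q
    return dq, dk, dv
```

The backward pass can be checked against a finite-difference approximation of the forward pass, which is also how standalone operator implementations like this one are typically validated against a LibTorch reference.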
Related projects:
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆40 · Updated 2 weeks ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆266 · Updated 2 weeks ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆13 · Updated last year
- This project is about convolution operator optimization on GPUs, including GEMM-based (implicit GEMM) convolution. ☆15 · Updated last week
- Step-by-step optimization of CUDA SGEMM. ☆207 · Updated 2 years ago
- A simple high-performance CUDA GEMM implementation. ☆319 · Updated 8 months ago
- An easy-to-understand TensorOp Matmul tutorial. ☆265 · Updated this week
- Optimizing SGEMM kernels on NVIDIA GPUs to close-to-cuBLAS performance. ☆265 · Updated 2 years ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆81 · Updated 2 months ago
- Swin Transformer C++ implementation. ☆53 · Updated 3 years ago
- Matrix multiply-accumulate with CUDA and WMMA (Tensor Cores). ☆109 · Updated 4 years ago
- An extension library for the WMMA API (Tensor Core API). ☆81 · Updated 2 months ago
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API. ☆22 · Updated last year
- Play GEMM with TVM. ☆81 · Updated last year
- Yinghan's Code Sample. ☆272 · Updated 2 years ago
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C’s level of abstraction for processing tiles. ☆114 · Updated last week
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer. ☆82 · Updated 6 months ago
- A tutorial for CUDA & PyTorch. ☆110 · Updated last week
- A Winograd minimal-filter implementation in CUDA. ☆20 · Updated 3 years ago
- Standalone Flash Attention v2 kernel without a libtorch dependency. ☆93 · Updated last week