jundaf2 / eigenMHA
Forward and backward attention DNN operators implemented with LibTorch, cuDNN, and Eigen.
☆30 · Updated 2 years ago
Alternatives and similar repositories for eigenMHA
Users interested in eigenMHA are comparing it to the libraries listed below.
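eigenMHA's core operation, the attention forward pass, can be illustrated with a minimal single-head sketch in plain C++. This is illustrative only: the repository itself builds on LibTorch, cuDNN, and Eigen, and the function and parameter names below are hypothetical, not taken from its API.

```cpp
#include <cmath>
#include <vector>

// Single-head scaled dot-product attention forward pass (illustrative sketch).
// Q, K, V are row-major [n x d]; returns softmax(Q K^T / sqrt(d)) V.
std::vector<float> attention_forward(const std::vector<float>& Q,
                                     const std::vector<float>& K,
                                     const std::vector<float>& V,
                                     int n, int d) {
    std::vector<float> out(n * d, 0.0f);
    const float scale = 1.0f / std::sqrt(static_cast<float>(d));
    std::vector<float> row(n);
    for (int i = 0; i < n; ++i) {
        // Scores for query i against all keys, tracking the row max
        // for a numerically stable softmax.
        float m = -INFINITY;
        for (int j = 0; j < n; ++j) {
            float s = 0.0f;
            for (int k = 0; k < d; ++k) s += Q[i * d + k] * K[j * d + k];
            row[j] = s * scale;
            m = std::max(m, row[j]);
        }
        // Softmax over the score row.
        float denom = 0.0f;
        for (int j = 0; j < n; ++j) {
            row[j] = std::exp(row[j] - m);
            denom += row[j];
        }
        // Weighted sum of value rows.
        for (int j = 0; j < n; ++j) {
            float w = row[j] / denom;
            for (int k = 0; k < d; ++k) out[i * d + k] += w * V[j * d + k];
        }
    }
    return out;
}
```

With all-zero queries every key scores equally, so the output is the mean of the value rows, e.g. `attention_forward({0, 0}, {1, 2}, {2, 4}, 2, 1)` averages `2` and `4` to give `3` for both rows.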
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆98 · Updated 7 years ago
- A Winograd minimal filter implementation in CUDA ☆28 · Updated 4 years ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores ☆64 · Updated 11 months ago
- ☆40 · Updated last year
- Swin Transformer C++ implementation ☆63 · Updated 4 years ago
- Matrix multiply-accumulate with CUDA and WMMA (Tensor Cores) ☆139 · Updated 5 years ago
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆16 · Updated 2 years ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆113 · Updated last year
- cuDNN sample code provided by NVIDIA ☆46 · Updated 6 years ago
- ☆114 · Updated last year
- ☆131 · Updated 8 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆183 · Updated 7 months ago
- ☆229 · Updated last year
- CUDA matrix multiplication optimization ☆218 · Updated last year
- Step-by-step optimization of CUDA SGEMM ☆370 · Updated 3 years ago
- How to design a CPU GEMM on x86 with AVX-256 that can beat OpenBLAS ☆71 · Updated 6 years ago
- ☆59 · Updated 9 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆222 · Updated 2 months ago
- A library of GPU kernels for sparse matrix operations ☆271 · Updated 4 years ago
- A simplified flash-attention implementation using CUTLASS, intended as a teaching example ☆46 · Updated last year
- Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU) ☆129 · Updated last week
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using Tensor Cores with the WMMA API and MMA PTX instruct… ☆465 · Updated 11 months ago
- FP8 flash attention implemented for the Ada architecture using the CUTLASS repository ☆74 · Updated last year
- CUDA Templates for Linear Algebra Subroutines ☆100 · Updated last year
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ⚡️ ☆107 · Updated 3 months ago
- An extension library for the WMMA API (Tensor Core API) ☆103 · Updated last year
- Play GEMM with TVM ☆91 · Updated 2 years ago
- ☆151 · Updated 8 months ago
- ☆68 · Updated 7 months ago
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance ☆373 · Updated 8 months ago
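The online-softmax benchmark listed above refers to the one-pass normalizer from the "Online normalizer calculation for softmax" paper: the running maximum and the running sum of exponentials are maintained together in a single sweep, rescaling the sum whenever a new maximum appears. A minimal C++ sketch of that recurrence (the function name is mine, not the benchmark's):

```cpp
#include <cmath>
#include <vector>

// One-pass "online" softmax: compute max and normalizer in a single sweep.
// When a larger maximum m_new is found, the partial sum d (accumulated
// relative to the old max m) is rescaled by exp(m - m_new).
std::vector<float> online_softmax(const std::vector<float>& x) {
    float m = -INFINITY;  // running maximum
    float d = 0.0f;       // running sum of exp(x[i] - m)
    for (float v : x) {
        float m_new = std::max(m, v);
        d = d * std::exp(m - m_new) + std::exp(v - m_new);
        m = m_new;
    }
    // Second pass only normalizes; the reduction itself was one pass.
    std::vector<float> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = std::exp(x[i] - m) / d;
    return y;
}
```

Fusing the max and sum reductions is what lets softmax be computed with one fewer pass over memory, which is also the trick flash attention builds on in several of the repositories above.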