mz24cn / gemm_optimization
This repository targets performance optimization of the OpenCL GEMM function. It compares the sgemm performance of several libraries (clBLAS, CLBlast, MIOpenGemm, Intel MKL on CPU, and cuBLAS on CUDA) across different matrix sizes, vendor hardware, and operating systems. Prebuilt MSVC, MinGW, and Linux (CentOS) x86_64 binaries are provided, so it works out of the box.
☆16 · Updated 5 years ago
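As a rough illustration of what such a comparison measures, below is a minimal sketch (not taken from this repository) that times a single SGEMM call through cuBLAS, one of the compared back ends, and reports GFLOP/s. The matrix sizes, single-iteration timing, and GFLOP/s formula are generic assumptions; a benchmark like this repository's would warm up the device, average over many repetitions, and repeat the measurement for each library.

```cpp
// Minimal sketch (not from the repository): time one single-precision GEMM
// through cuBLAS, one of the back ends compared against clBLAS, CLBlast,
// MIOpenGemm and Intel MKL. Error checking is omitted for brevity.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int M = 1024, N = 1024, K = 1024;   // assumed square problem size
    std::vector<float> hA(static_cast<size_t>(M) * K, 1.0f);
    std::vector<float> hB(static_cast<size_t>(K) * N, 1.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(float) * M * K);
    cudaMalloc(&dB, sizeof(float) * K * N);
    cudaMalloc(&dC, sizeof(float) * M * N);
    cudaMemcpy(dA, hA.data(), sizeof(float) * M * K, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), sizeof(float) * K * N, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;    // beta = 0: C need not be initialized

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // C = alpha * A * B + beta * C, column-major layout, no transposition
    cudaEventRecord(start);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, M, N, K,
                &alpha, dA, M, dB, K, &beta, dC, M);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gflops = 2.0 * M * N * K / (ms * 1e6);   // 2*M*N*K flops per GEMM
    std::printf("SGEMM %dx%dx%d: %.3f ms, %.1f GFLOP/s\n", M, N, K, ms, gflops);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```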
Related projects
Alternatives and complementary repositories for gemm_optimization
- ICML 2017 MEC: Memory-efficient Convolution for Deep Neural Network, C++ implementation (unofficial) ☆17 · Updated 5 years ago
- A Winograd Minimal Filter Implementation in CUDA ☆23 · Updated 3 years ago
- An extension library of WMMA API (Tensor Core API) ☆84 · Updated 4 months ago
- flexible-gemm conv of deepcore ☆17 · Updated 4 years ago
- Subpart source code of deepcore v0.7 ☆27 · Updated 4 years ago
- How to design a CPU GEMM on x86 with AVX256 that can beat OpenBLAS ☆66 · Updated 5 years ago
- Study of CUTLASS ☆19 · Updated last week
- Common libraries for PPL projects ☆29 · Updated last month
- ☆17 · Updated 4 years ago
- ☆22 · Updated 7 months ago
- ☆59 · Updated this week
- ☆38 · Updated 4 years ago
- Yet another polyhedral compiler for deep learning ☆19 · Updated last year
- ☆37 · Updated 3 years ago
- Optimize GEMM with tensorcore step by step ☆15 · Updated 11 months ago
- This is a tuned sparse matrix dense vector multiplication (SpMV) library ☆21 · Updated 8 years ago
- THIS REPOSITORY HAS MOVED TO github.com/nvidia/cub, WHICH IS AUTOMATICALLY MIRRORED HERE. ☆83 · Updated 9 months ago
- FP64 equivalent GEMM via Int8 Tensor Cores using the Ozaki scheme ☆46 · Updated 2 months ago
- ☆17 · Updated 7 months ago
- A GPU benchmark suite for assessing on-chip GPU memory bandwidth ☆99 · Updated 7 years ago
- ☆40 · Updated 3 years ago
- Winograd-based convolution implementation in OpenCL ☆28 · Updated 7 years ago
- Dissecting NVIDIA GPU Architecture ☆82 · Updated 2 years ago
- Collection of CUDA benchmarks, with a focus on unified vs. explicit memory management ☆20 · Updated 5 years ago
- HCC Sample Applications ☆13 · Updated 7 years ago
- ☆93 · Updated 3 years ago
- ☆15 · Updated 10 months ago
- ☆80 · Updated 7 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆85 · Updated 8 months ago