mz24cn / gemm_optimization
This repository targets performance optimization of the OpenCL GEMM function. It compares the sgemm performance of several BLAS libraries (clBLAS, CLBlast, MIOpenGemm, Intel MKL on CPU, and cuBLAS on CUDA) across different matrix sizes, vendor hardware, and operating systems. Ready-to-use x86_64 binaries are provided for MSVC, MinGW, and Linux (CentOS).
☆14 · Updated 5 years ago
Related projects:
- A Winograd Minimal Filter Implementation in CUDA ☆20 · Updated 3 years ago
- flexible-gemm conv of deepcore ☆17 · Updated 4 years ago
- MEC: Memory-efficient Convolution for Deep Neural Network (ICML 2017), unofficial C++ implementation ☆17 · Updated 5 years ago
- An extension library of the WMMA API (Tensor Core API) ☆81 · Updated 2 months ago
- A tuned sparse matrix-dense vector multiplication (SpMV) library ☆21 · Updated 8 years ago
- ☆38 · Updated 4 years ago
- How to design a CPU GEMM on x86 with AVX-256 that can beat OpenBLAS ☆64 · Updated 5 years ago
- Yet another polyhedral compiler for deep learning ☆19 · Updated last year
- CUDA Tensor Transpose (cuTT) library ☆49 · Updated 7 years ago
- Partial source code of deepcore v0.7 ☆27 · Updated 4 years ago
- ☆39 · Updated 3 years ago
- CNNs in Halide ☆22 · Updated 8 years ago
- ☆20 · Updated this week
- Winograd-based convolution implementation in OpenCL ☆27 · Updated 7 years ago
- ☆17 · Updated 4 years ago
- A study of CUTLASS ☆18 · Updated last year
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆52 · Updated 2 years ago
- Sparse-dense matrix-matrix multiplication on GPUs ☆14 · Updated 5 years ago
- ☆34 · Updated 3 years ago
- ☆18 · Updated 5 months ago
- ONNX Parser, a tool that automatically generates OpenVX inference code (CNN) from ONNX binary model files ☆17 · Updated 5 years ago
- ☆18 · Updated 2 years ago
- A GPU benchmark suite for assessing on-chip GPU memory bandwidth ☆96 · Updated 7 years ago
- Common libraries for PPL projects ☆28 · Updated last week
- Implementation of TSM2L and TSM2R, high-performance tall-and-skinny matrix-matrix multiplication algorithms for CUDA ☆31 · Updated 4 years ago
- ☆73 · Updated 5 months ago
- ☆10 · Updated 4 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆82 · Updated 6 months ago
- ☆52 · Updated this week
- Learn OpenCL step by step ☆127 · Updated 2 years ago