hma02 / cublasHgemm-P100
Code for testing native float16 matrix multiplication performance on Tesla P100 and V100 GPUs using cublasHgemm
☆34 · Updated 5 years ago
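For reference, the sketch below shows roughly how such a benchmark can be structured: allocate half-precision matrices on the device, warm up, then time repeated cublasHgemm calls with CUDA events and convert the elapsed time into TFLOP/s. This is a minimal, assumed example rather than this repository's actual code; the matrix size `N`, the iteration count, and the zero-initialized inputs are placeholder choices.

```cpp
// Minimal sketch (assumed, not the repository's exact benchmark): time an
// FP16 GEMM C = alpha*A*B + beta*C with cublasHgemm and report TFLOP/s.
#include <cstdio>
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <cublas_v2.h>

int main() {
    const int N = 4096;      // assumed square matrix dimension
    const int iters = 10;    // assumed number of timed iterations
    const size_t bytes = (size_t)N * N * sizeof(__half);

    // Allocate and zero-fill the matrices; only throughput is measured here.
    __half *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemset(dA, 0, bytes);
    cudaMemset(dB, 0, bytes);
    cudaMemset(dC, 0, bytes);

    cublasHandle_t handle;
    cublasCreate(&handle);
    __half alpha = __float2half(1.0f);
    __half beta  = __float2half(0.0f);

    // Warm-up call so one-time setup cost stays out of the timed loop.
    cublasHgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                &alpha, dA, N, dB, N, &beta, dC, N);
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i) {
        cublasHgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                    &alpha, dA, N, dB, N, &beta, dC, N);
    }
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    const double tflops = 2.0 * N * N * N * iters / (ms * 1e-3) / 1e12;
    printf("cublasHgemm %dx%dx%d: %.3f ms/iter, %.2f TFLOP/s\n",
           N, N, N, ms / iters, tflops);

    cublasDestroy(handle);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);
    return 0;
}
```

Something along the lines of `nvcc -arch=sm_60 hgemm_bench.cu -lcublas` (sm_60 for P100, sm_70 for V100) should build it; the file name here is illustrative.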
Alternatives and similar repositories for cublasHgemm-P100
Users interested in cublasHgemm-P100 are comparing it to the libraries listed below.
- Subpart source code of deepcore v0.7 ☆27 · Updated 4 years ago
- Optimized half-precision GEMM assembly kernels (deprecated due to ROCm) ☆47 · Updated 7 years ago
- flexible-gemm conv of deepcore ☆17 · Updated 5 years ago
- MEC: Memory-efficient Convolution for Deep Neural Network (ICML 2017), unofficial C++ implementation ☆17 · Updated 6 years ago
- How to design a CPU GEMM on x86 with 256-bit AVX that can beat OpenBLAS ☆70 · Updated 6 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 6 years ago
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆191 · Updated 6 years ago
- Fast CUDA kernels for ResNet inference ☆174 · Updated 5 years ago
- Winograd-based convolution implementation in OpenCL ☆28 · Updated 8 years ago
- Benchmark of TVM quantized models on CUDA ☆111 · Updated 4 years ago
- Symmetric int8 GEMM ☆66 · Updated 4 years ago
- Efficient Top-K implementation on the GPU ☆178 · Updated 6 years ago
- Tutorial on optimizing GEMM performance on Android ☆51 · Updated 9 years ago
- TopHub AutoTVM log collections ☆69 · Updated 2 years ago
- ☆24 · Updated 7 years ago
- TensorFlow and TVM integration ☆37 · Updated 5 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆57 · Updated 2 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆70 · Updated 8 years ago
- This repository has moved to github.com/nvidia/cub, which is automatically mirrored here. ☆84 · Updated last year
- Caffe for Sparse Convolutional Neural Network ☆237 · Updated 2 years ago
- Test Winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 6 years ago
- A hands-on tutorial for learning TVM's core principles ☆61 · Updated 4 years ago
- A way to use CUDA to accelerate the top-k algorithm ☆29 · Updated 7 years ago
- Tengine GEMM tutorial, step by step ☆13 · Updated 4 years ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark ☆55 · Updated 2 years ago
- heterogeneity-aware-lowering-and-optimization ☆254 · Updated last year
- Library for fast image convolution in neural networks on Intel Architecture ☆29 · Updated 7 years ago
- ☆69 · Updated 2 years ago
- GPU implementation of Winograd convolution ☆10 · Updated 7 years ago
- ☆26 · Updated 8 years ago