hma02 / cublasHgemm-P100
Code for testing native float16 (FP16) matrix multiplication performance on Tesla P100 and V100 GPUs using cublasHgemm
☆34 · Updated 5 years ago
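For context, here is a minimal, self-contained sketch of the kind of benchmark this repository performs: it times repeated cublasHgemm calls and reports achieved TFLOPS. The 4096³ problem size, iteration count, and lack of error checking are illustrative assumptions, not taken from the repository.

```cpp
// Hedged sketch of an FP16 GEMM benchmark via cublasHgemm.
// Assumptions: square 4096x4096 matrices, column-major layout, constant inputs.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <cublas_v2.h>

int main() {
    const int n = 4096;                               // assumed problem size
    const size_t bytes = size_t(n) * n * sizeof(__half);

    // Host matrices filled with a constant half-precision value.
    std::vector<__half> hA(size_t(n) * n, __float2half(1.0f));
    std::vector<__half> hB(size_t(n) * n, __float2half(1.0f));

    __half *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const __half alpha = __float2half(1.0f), beta = __float2half(0.0f);

    // Warm-up call, then time a batch of iterations with CUDA events.
    cublasHgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    const int iters = 10;
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cublasHgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, dA, n, dB, n, &beta, dC, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double tflops = 2.0 * n * n * n * iters / (ms * 1e-3) / 1e12;
    printf("cublasHgemm %dx%dx%d: %.3f ms/iter, %.2f TFLOPS\n",
           n, n, n, ms / iters, tflops);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

A build along the lines of `nvcc -arch=sm_60 hgemm_bench.cu -lcublas` (sm_60 for P100, sm_70 for V100) should work; the file name is hypothetical.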
Alternatives and similar repositories for cublasHgemm-P100:
Users interested in cublasHgemm-P100 are comparing it to the libraries listed below
- Partial source code of deepcore v0.7 ☆27 · Updated 4 years ago
- How to design a CPU GEMM on x86 with 256-bit AVX that can beat OpenBLAS ☆68 · Updated 5 years ago
- flexible-gemm conv of deepcore ☆17 · Updated 5 years ago
- Benchmark of TVM quantized model on CUDA ☆111 · Updated 4 years ago
- Symmetric int8 GEMM ☆66 · Updated 4 years ago
- A hands-on tutorial on the core principles of TVM ☆60 · Updated 4 years ago
- heterogeneity-aware-lowering-and-optimization ☆254 · Updated last year
- Efficient Top-K implementation on the GPU ☆155 · Updated 5 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆180 · Updated 6 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆197 · Updated 2 years ago
- tophub autotvm log collections ☆70 · Updated 2 years ago
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆190 · Updated 5 years ago
- TVM tutorial ☆65 · Updated 6 years ago
- Fast CUDA Kernels for ResNet Inference ☆172 · Updated 5 years ago
- Unofficial C++ implementation of MEC: Memory-efficient Convolution for Deep Neural Network (ICML 2017) ☆17 · Updated 5 years ago
- Place for meetup slides ☆140 · Updated 4 years ago
- TVM learning and research ☆12 · Updated 4 years ago
- ☆95 · Updated 3 years ago
- Tengine GEMM tutorial, step by step ☆12 · Updated 4 years ago
- Using CUDA to accelerate the top-k algorithm ☆29 · Updated 7 years ago
- TensorFlow and TVM integration ☆37 · Updated 4 years ago
- Optimized half-precision GEMM assembly kernels (deprecated due to ROCm) ☆47 · Updated 7 years ago
- ☆10 · Updated 4 years ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark ☆55 · Updated last year
- Examples for the TVM schedule API ☆99 · Updated last year
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆57 · Updated last year
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices ☆150 · Updated 2 years ago
- Caffe for Sparse Convolutional Neural Network ☆238 · Updated 2 years ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆82 · Updated last year
- To make it easy to benchmark AI accelerators ☆183 · Updated 2 years ago