tpoisonooo / chgemm
symmetric int8 gemm
☆66 · Updated 5 years ago
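For context, an int8 GEMM multiplies signed 8-bit matrices while accumulating into 32-bit integers; "symmetric" refers to symmetric quantization, where values lie in [-127, 127] with a zero-point of 0. A minimal reference sketch (illustrative only, not chgemm's actual API or kernel layout):

```python
def int8_gemm(A, B, M, N, K):
    """Naive reference int8 GEMM: C = A @ B with int32 accumulation.

    A: row-major M*K list of ints in [-127, 127] (symmetric quantization,
       zero-point 0, so no zero-point correction terms are needed)
    B: row-major K*N list of ints in the same range
    Returns C as a row-major M*N list of int32 accumulators.
    """
    C = [0] * (M * N)
    for i in range(M):
        for j in range(N):
            acc = 0  # int32 accumulator; safe while K * 127 * 127 < 2**31
            for k in range(K):
                acc += A[i * K + k] * B[k * N + j]
            C[i * N + j] = acc
    return C
```

Optimized kernels such as chgemm's replace the inner loop with SIMD multiply-accumulate instructions and blocked memory layouts, but compute the same int32 result.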
Alternatives and similar repositories for chgemm
Users interested in chgemm are comparing it to the libraries listed below.
- ☆97 · Updated 3 years ago
- How to design a CPU GEMM on x86 with AVX-256 that can beat OpenBLAS. ☆70 · Updated 6 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆58 · Updated 2 years ago
- arm-neon ☆90 · Updated 11 months ago
- Hands-on tutorial on TVM core principles ☆62 · Updated 4 years ago
- Common libraries for PPL projects ☆29 · Updated 4 months ago
- Tencent NCNN with added CUDA support ☆69 · Updated 4 years ago
- Benchmark for embedded-AI deep learning inference engines such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 4 years ago
- My learning notes about AI, including machine learning and deep learning. ☆18 · Updated 6 years ago
- mperf is an operator performance tuning toolbox for mobile/embedded platforms ☆186 · Updated last year
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- ☆17 · Updated last year
- ☆37 · Updated 9 months ago
- Tengine GEMM tutorial, step by step ☆13 · Updated 4 years ago
- Quantization-aware training package for NCNN on PyTorch ☆69 · Updated 3 years ago
- heterogeneity-aware-lowering-and-optimization ☆255 · Updated last year
- ☆21 · Updated 4 years ago
- Tengine Convert Tool supports converting models from multiple frameworks into tmfile, a format suitable for the Tengine-Lite AI framework. ☆93 · Updated 3 years ago
- Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU). ☆123 · Updated 3 weeks ago
- Qualcomm Hexagon NN Offload Framework ☆43 · Updated 4 years ago
- Benchmark of TVM quantized models on CUDA ☆111 · Updated 5 years ago
- ☆23 · Updated 2 years ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆83 · Updated 2 years ago
- OneFlow->ONNX ☆43 · Updated 2 years ago
- An implementation of an sgemm kernel tuned for the L1d cache. ☆229 · Updated last year
- ☆10 · Updated 4 years ago
- TVM tutorial ☆66 · Updated 6 years ago
- ARM NEON documentation and instruction reference (in Chinese) ☆243 · Updated 6 years ago
- Fast CUDA kernels for ResNet inference. ☆176 · Updated 6 years ago
- NART ("NART is not A RunTime"), a deep learning inference framework. ☆37 · Updated 2 years ago