XiaoMi / nnlib
Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib
☆57 · Updated last year
Alternatives and similar repositories for nnlib:
Users interested in nnlib compare it to the libraries listed below.
- symmetric int8 gemm ☆66 · Updated 4 years ago
- Qualcomm Hexagon NN Offload Framework ☆40 · Updated 4 years ago
- arm-neon ☆89 · Updated 5 months ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆179 · Updated 6 years ago
- How to design CPU GEMM on x86 with AVX256 that can beat OpenBLAS ☆67 · Updated 5 years ago
- Common libraries for PPL projects ☆29 · Updated 3 months ago
- flexible-gemm conv of deepcore ☆17 · Updated 5 years ago
- Tengine GEMM tutorial, step by step ☆11 · Updated 3 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 3 years ago
- Tencent NCNN with added CUDA support ☆68 · Updated 4 years ago
- Partial source code of deepcore v0.7 ☆27 · Updated 4 years ago
- ☆94 · Updated 3 years ago
- Benchmark of TVM quantized model on CUDA ☆111 · Updated 4 years ago
- TVM tutorial ☆65 · Updated 6 years ago
- Tutorial to optimize GEMM performance on Android ☆51 · Updated 8 years ago
- A demo of how to write a high-performance convolution that runs on Apple Silicon ☆52 · Updated 2 years ago
- Hands-on tutorial on the core principles of TVM ☆59 · Updated 4 years ago
- TensorFlow and TVM integration ☆37 · Updated 4 years ago
- VeriSilicon Tensor Interface Module ☆229 · Updated 3 weeks ago
- mperf: an operator performance tuning toolbox for mobile/embedded platforms ☆175 · Updated last year
- Image processing library for learning purposes ☆53 · Updated last month
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆79 · Updated last year
- Converter from MegEngine to other frameworks ☆69 · Updated last year
- ARM NEON documentation and instruction semantics ☆241 · Updated 5 years ago
- Acuity Model Zoo ☆136 · Updated 2 years ago
- heterogeneity-aware-lowering-and-optimization ☆254 · Updated last year
- A stub OpenCL library that dynamically dlopen/dlsyms OpenCL implementations at runtime based on environment variables. Will be useful when… ☆70 · Updated 10 months ago
- DDK for Rockchip NPU ☆62 · Updated 4 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices ☆150 · Updated 2 years ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆14 · Updated 2 years ago