waau / qualcomm-nnlib
Qualcomm Hexagon NN Offload Framework
☆39, updated 4 years ago
Related projects
Alternatives and complementary repositories for qualcomm-nnlib
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib (☆54, updated last year)
- Symmetric int8 GEMM (☆66, updated 4 years ago); see the sketch after this list
- Benchmark for embedded-AI deep learning inference engines such as NCNN, TNN, MNN, and TensorFlow Lite (☆202, updated 3 years ago)
- Benchmark of TVM quantized models on CUDA (☆112, updated 4 years ago)
- TVM tutorial (☆65, updated 5 years ago)
- Tengine GEMM tutorial, step by step (☆11, updated 3 years ago)
- Optimizing Mobile Deep Learning on ARM GPU with TVM (☆179, updated 6 years ago)
- tophub autotvm log collections (☆70, updated last year)
- A demo of how to write a high-performance convolution for Apple silicon (☆52, updated 2 years ago)
- How to design a CPU GEMM on x86 with 256-bit AVX that can beat OpenBLAS (☆64, updated 5 years ago)
- TFLite Python API package for parsing TFLite models (☆12, updated 4 years ago)
- TensorFlow and TVM integration (☆38, updated 4 years ago)
- Converter from MegEngine to other frameworks (☆67, updated last year)
- Yet another polyhedral compiler for deep learning (☆19, updated last year)
- Benchmark scripts for TVM (☆73, updated 2 years ago)
- NART (NART is not A RunTime), a deep learning inference framework (☆38, updated last year)
- Simulated quantization and quantization-aware training for MXNet-Gluon models (☆46, updated 4 years ago)
- Quantization-aware training package for NCNN on PyTorch (☆68, updated 3 years ago)
- Common libraries for PPL projects (☆29, updated 3 weeks ago)
- My learning notes about AI, including machine learning and deep learning (☆18, updated 5 years ago)
- Hands-on tutorial on TVM core principles (☆59, updated 3 years ago)
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer (☆85, updated 8 months ago)
- Inference of quantization-aware trained networks using TensorRT (☆78, updated last year)
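As a point of reference for the symmetric int8 GEMM entry above, here is a minimal sketch of what such a kernel computes, assuming the common convention of signed 8-bit operands with no zero-points (symmetric quantization) and 32-bit accumulation. The function name, shapes, and layout are illustrative, not taken from that repository; optimized libraries replace this triple loop with tiled, vectorized code.

```c
/* Reference symmetric int8 GEMM: C[m][n] = sum_k A[m][k] * B[k][n].
 * Symmetric quantization means no zero-point offsets are subtracted;
 * products are accumulated in int32 so they cannot overflow int8.
 * All matrices are row-major. Name and API are hypothetical. */
#include <stdint.h>
#include <stdio.h>

static void gemm_s8s8s32(int M, int N, int K,
                         const int8_t *A, const int8_t *B, int32_t *C) {
    for (int m = 0; m < M; ++m) {
        for (int n = 0; n < N; ++n) {
            int32_t acc = 0;
            for (int k = 0; k < K; ++k)
                acc += (int32_t)A[m * K + k] * (int32_t)B[k * N + n];
            C[m * N + n] = acc;
        }
    }
}

int main(void) {
    /* Toy 2x3 * 3x2 example. */
    int8_t A[6] = {1, -2, 3, 4, 5, -6};
    int8_t B[6] = {7, 8, 9, 10, 11, -12};
    int32_t C[4];
    gemm_s8s8s32(2, 2, 3, A, B, C);
    for (int i = 0; i < 4; ++i)
        printf("%d ", C[i]);   /* prints: 22 -48 7 154 */
    printf("\n");
    return 0;
}
```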