Arm-China / Compass_Optimizer
Compass Optimizer (OPT for short) is part of the Zhouyi Compass Neural Network Compiler. OPT converts the float Intermediate Representation (IR) generated by the Compass Unified Parser into an optimized quantized or mixed IR suited to Zhouyi NPU hardware platforms.
☆27 · Updated 3 months ago
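The float-to-quantized conversion described above rests, at its core, on affine (scale and zero-point) quantization of tensors. The sketch below is a minimal, hypothetical NumPy illustration of that idea only; none of these names belong to Compass Optimizer's actual API, and OPT's real pipeline (calibration, mixed precision, per-channel schemes) is far more involved.

```python
# Conceptual sketch of per-tensor asymmetric uint8 quantization -- the kind of
# float -> integer transformation a quantizing optimizer performs. All names
# are illustrative; this is NOT the Compass Optimizer interface.
import numpy as np

def choose_qparams(x: np.ndarray, num_bits: int = 8):
    """Pick a scale/zero-point pair covering the tensor's observed range."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(float(x.min()), 0.0), max(float(x.max()), 0.0)  # range must include 0
    scale = max((hi - lo) / (qmax - qmin), 1e-12)  # guard against zero range
    zero_point = int(np.clip(round(qmin - lo / scale), qmin, qmax))
    return scale, zero_point

def quantize(x, scale, zero_point, num_bits=8):
    """Map floats to unsigned integers: q = round(x / scale) + zp."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 2 ** num_bits - 1).astype(np.uint8)

def dequantize(q, scale, zero_point):
    """Recover approximate floats: x ~= scale * (q - zp)."""
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(4, 4).astype(np.float32)
scale, zp = choose_qparams(weights)
q = quantize(weights, scale, zp)
print("max abs round-trip error:", np.abs(dequantize(q, scale, zp) - weights).max())
```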
Alternatives and similar repositories for Compass_Optimizer:
Users interested in Compass_Optimizer are comparing it to the repositories listed below.
- Arm China NPU parser ☆38 · Updated 3 months ago
- Arm China NPU integration ☆22 · Updated last year
- Code reading for TVM ☆76 · Updated 3 years ago
- Zhouyi model zoo ☆98 · Updated 7 months ago
- Arm China NPU driver ☆51 · Updated 3 months ago
- ☆30 · Updated 2 years ago
- Examples for the TVM schedule API ☆101 · Updated last year
- ☆29 · Updated last week
- Aiming at an AI chip based on RISC-V and NVDLA ☆20 · Updated 7 years ago
- Chinese translation of the CUDA PTX ISA documentation ☆38 · Updated last month
- TopHub AutoTVM log collections ☆69 · Updated 2 years ago
- TVM tutorial ☆66 · Updated 6 years ago
- ☆44 · Updated 5 years ago
- Play GEMM with TVM ☆91 · Updated last year
- ☆17 · Updated 4 years ago
- VeriSilicon Tensor Interface Module ☆234 · Updated 4 months ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆108 · Updated 2 years ago
- Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU) ☆116 · Updated last month
- A hands-on tutorial on TVM core principles ☆61 · Updated 4 years ago
- ☆19 · Updated last month
- Compass Apache TVM is enhanced based on Apache TVM for a wide range of Neural Network (NN) models: quick support, optimization, and heter… ☆18 · Updated 2 months ago
- A set of examples around MegEngine ☆31 · Updated last year
- An optimized neural network operator library for chips based on the Xuantie CPU ☆89 · Updated 10 months ago
- Inference of quantization-aware trained networks using TensorRT ☆80 · Updated 2 years ago
- ☆148 · Updated 3 months ago
- ☆139 · Updated 4 months ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆83 · Updated 2 years ago
- ☆13 · Updated 5 years ago
- NART (NART Is Not A RunTime), a deep learning inference framework ☆37 · Updated 2 years ago
- ☆95 · Updated this week