Arm-China / Compass_Optimizer
Compass Optimizer (OPT for short) is part of the Zhouyi Compass Neural Network Compiler. OPT converts the float Intermediate Representation (IR) generated by the Compass Unified Parser into an optimized quantized or mixed IR suited for Zhouyi NPU hardware platforms (a generic sketch of this float-to-integer mapping is given below).
☆30 · Updated 2 months ago
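This page carries no usage example, but the float-to-quantized conversion described above ultimately maps float tensors onto an integer grid with a recorded scale. The sketch below is a minimal, generic illustration of symmetric per-tensor int8 quantization in plain NumPy; it is not the Compass Optimizer API, and every name in it is hypothetical.

```python
import numpy as np

def quantize_symmetric_int8(weights: np.ndarray):
    """Map a float tensor onto the int8 grid with one per-tensor scale."""
    # The scale is chosen so the largest-magnitude float maps to the int8 limit 127.
    max_abs = float(np.abs(weights).max()) if weights.size else 1.0
    scale = max_abs / 127.0 or 1.0  # guard against an all-zero tensor
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover the approximate float tensor that the int8 values represent."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_symmetric_int8(w)
    # The round-trip error is bounded by half a quantization step (scale / 2).
    print("max abs round-trip error:", np.abs(w - dequantize(q, scale)).max())
```

Symmetric per-tensor scaling is only the simplest such scheme; per-channel scales and asymmetric zero points follow the same pattern.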
Alternatives and similar repositories for Compass_Optimizer
Users interested in Compass_Optimizer are comparing it to the libraries listed below
- armchina NPU parser ☆41 · Updated 2 months ago
- code reading for tvm ☆76 · Updated 3 years ago
- examples for tvm schedule API ☆101 · Updated 2 years ago
- Efficient operation implementation based on the Cambricon Machine Learning Unit (MLU). ☆143 · Updated last week
- tophub autotvm log collections ☆69 · Updated 2 years ago
- Development repository for the Triton-Linalg conversion ☆206 · Updated 10 months ago
- Aiming at an AI Chip based on RISC-V and NVDLA. ☆21 · Updated 7 years ago
- ☆17 · Updated 5 years ago
- ☆33 · Updated 2 years ago
- A home for the final text of all TVM RFCs. ☆108 · Updated last year
- heterogeneity-aware-lowering-and-optimization ☆257 · Updated last year
- VeriSilicon Tensor Interface Module ☆245 · Updated 3 weeks ago
- TVM tutorial ☆66 · Updated 6 years ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆119 · Updated 3 years ago
- This fork of BVLC/Caffe is dedicated to supporting the Cambricon deep learning processor and improving performance of this deep learning framework… ☆41 · Updated 5 years ago
- A tensor computing compiler based on tile programming for GPU, CPU, or TPU ☆45 · Updated 3 months ago
- ☆156 · Updated 11 months ago
- Hands-on tutorial on the core principles of TVM ☆63 · Updated 5 years ago
- Chinese translation of the CUDA PTX ISA documentation ☆47 · Updated 2 months ago
- Automatic Schedule Exploration and Optimization Framework for Tensor Computations ☆180 · Updated 3 years ago
- ☆11 · Updated 2 years ago
- ☆145 · Updated last year
- ☆19 · Updated 3 weeks ago
- ☆152 · Updated 11 months ago
- play gemm with tvm ☆92 · Updated 2 years ago
- armchina NPU Integration ☆24 · Updated 2 months ago
- ☆41 · Updated 3 years ago
- CSV spreadsheets and other material for AI accelerator survey papers ☆185 · Updated 3 weeks ago
- NART (NART is not A RunTime), a deep learning inference framework ☆37 · Updated 2 years ago
- armchina NPU driver ☆58 · Updated 2 months ago