tum-ei-eda / utvm_staticrt_codegen
This project contains a code generator that produces static C NN inference deployment code targeting tiny microcontrollers (TinyML), as a replacement for other µTVM runtimes. The tool generates a runtime that statically executes the compiled model, which reduces the overhead in terms of code size and execution time compared to having a dynamic … (a rough sketch of the idea is shown below).
☆30 · Updated 3 years ago
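As a rough illustration of the approach (a hedged sketch, not the project's actual generated output), the C example below shows what statically scheduled inference code can look like. All symbol names, tensor sizes, and the placeholder kernels are hypothetical; the point is the structure: a fixed operator call sequence over compile-time-sized buffers, with no graph interpreter and no dynamic memory allocation at runtime.

```c
/* Hypothetical sketch of statically generated inference code. Symbol names,
 * buffer sizes, and the placeholder "kernels" are illustrative only and do
 * not come from utvm_staticrt_codegen itself. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define INPUT_SIZE   64
#define HIDDEN_SIZE  32
#define OUTPUT_SIZE  10

/* Intermediate buffer planned at compile time; no malloc/free at runtime. */
static int8_t g_activations[HIDDEN_SIZE];

/* Placeholder kernels standing in for compiler-emitted fused operators. */
static int32_t fused_conv2d_relu(const int8_t *in, int8_t *out)
{
    for (int i = 0; i < HIDDEN_SIZE; ++i) {
        int8_t v = in[i % INPUT_SIZE];
        out[i] = v > 0 ? v : 0;          /* ReLU stand-in */
    }
    return 0;
}

static int32_t fused_dense(const int8_t *in, int8_t *out)
{
    memset(out, 0, OUTPUT_SIZE);
    for (int i = 0; i < OUTPUT_SIZE; ++i) {
        out[i] = in[i % HIDDEN_SIZE];    /* copy stand-in */
    }
    return 0;
}

/* The whole graph runs as a fixed call sequence known at code-generation time:
 * no operator table lookup, no graph interpreter, no runtime memory planner. */
int32_t run_inference(const int8_t *input, int8_t *output)
{
    int32_t status = fused_conv2d_relu(input, g_activations);
    if (status != 0) {
        return status;
    }
    return fused_dense(g_activations, output);
}

int main(void)
{
    int8_t input[INPUT_SIZE] = {1, -2, 3};
    int8_t output[OUTPUT_SIZE];

    if (run_inference(input, output) == 0) {
        printf("output[0] = %d\n", output[0]);
    }
    return 0;
}
```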
Alternatives and similar repositories for utvm_staticrt_codegen
Users interested in utvm_staticrt_codegen are comparing it to the libraries listed below.
- ☆31 · Updated 2 years ago
- Tool for the deployment and analysis of TinyML applications on TFLM and MicroTVM backends ☆35 · Updated last week
- muRISCV-NN is a collection of efficient deep learning kernels for embedded platforms and microcontrollers. ☆82 · Updated last month
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆88 · Updated 2 years ago
- ☆29 · Updated 4 years ago
- Conversions to MLIR EmitC ☆129 · Updated 7 months ago
- IREE plugin repository for the AMD AIE accelerator ☆98 · Updated this week
- ☆82 · Updated last year
- This is the open-source version of TinyTS. The code is dirty so far; we may clean it up in the future. ☆17 · Updated 11 months ago
- An optimized neural network operator library for chips based on the Xuantie CPU. ☆90 · Updated last year
- ☆102 · Updated this week
- ☆44 · Updated 5 years ago
- Example for running IREE in a bare-metal Arm environment. ☆36 · Updated 4 months ago
- A translator from C to MLIR ☆28 · Updated 3 years ago
- A tool to deploy Deep Neural Networks on PULP-based SoCs ☆82 · Updated 5 months ago
- Learn NVDLA by SOMNIA ☆33 · Updated 5 years ago
- ☆14 · Updated 3 years ago
- An MLIR-based toy DL compiler for TVM Relay. ☆58 · Updated 2 years ago
- ☆37 · Updated last year
- CSV spreadsheets and other material for AI accelerator survey papers ☆172 · Updated last year
- EEMBC's Machine-Learning Inference Benchmark targeted at edge devices. ☆49 · Updated 3 years ago
- LCAI-TIHU SW is a software stack for the AI inference processor based on RISC-V ☆23 · Updated 2 years ago
- ☆17 · Updated 5 years ago
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices ☆43 · Updated 5 years ago
- Sandbox for TVM and playing around! ☆22 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆48 · Updated last year
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆149 · Updated 2 weeks ago
- ☆92 · Updated last year
- A home for the final text of all TVM RFCs. ☆105 · Updated 9 months ago