benja263 / Integer-Only-Inference-for-Deep-Learning-in-Native-C
Converting a deep neural network to integer-only inference in native C via uniform quantization and the fixed-point representation.
☆24 · Updated 3 years ago
Alternatives and similar repositories for Integer-Only-Inference-for-Deep-Learning-in-Native-C
Users interested in Integer-Only-Inference-for-Deep-Learning-in-Native-C are comparing it to the libraries listed below.
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆149 · Updated 2 weeks ago
- This is the open-source version of TinyTS. The code is dirty so far. We may clean the code in the future. ☆17 · Updated last year
- Floating-Point Optimized On-Device Learning Library for the PULP Platform ☆35 · Updated 2 months ago
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆88 · Updated 2 years ago
- muRISCV-NN is a collection of efficient deep learning kernels for embedded platforms and microcontrollers ☆82 · Updated last month
- A tool to deploy Deep Neural Networks on PULP-based SoCs ☆82 · Updated 5 months ago
- ☆31 · Updated 2 years ago
- CSV spreadsheets and other material for AI accelerator survey papers ☆172 · Updated last year
- ☆29 · Updated 4 years ago
- Low-precision (quantized) YOLOv5 ☆41 · Updated 3 months ago
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices ☆43 · Updated 5 years ago
- Tool for the deployment and analysis of TinyML applications on TFLM and MicroTVM backends ☆35 · Updated last week
- This project contains a code generator that produces static C NN inference deployment code targeting tiny micro-controllers (TinyML) as r… ☆30 · Updated 3 years ago
- A library to train and deploy quantised Deep Neural Networks ☆24 · Updated 6 months ago
- [ICCAD'22 TinyML Contest] Efficient Heart Stroke Detection on Low-cost Microcontrollers ☆14 · Updated 2 years ago
- Machine-Learning Accelerator System Exploration Tools ☆171 · Updated last month
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" on arXiv ☆84 · Updated 2 years ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆132 · Updated 5 months ago
- ☆37 · Updated last year
- The official proof-of-concept C++ implementation of PocketNN ☆34 · Updated last year
- The Riallto Open Source Project from AMD ☆81 · Updated 3 months ago
- Open Source Compiler Framework using ONNX as Frontend and IR ☆32 · Updated 2 years ago
- Curated content for DNN approximation, acceleration … with a focus on hardware accelerators and deployment ☆27 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware ☆110 · Updated 7 months ago
- ☆23 · Updated 10 months ago
- INT-Q extension of the CMSIS-NN library for ARM Cortex-M targets ☆18 · Updated 5 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆48 · Updated last year
- ☆153 · Updated 2 years ago
- SAMO: Streaming Architecture Mapping Optimisation ☆33 · Updated last year
- An optimized neural network operator library for chips based on Xuantie CPUs ☆90 · Updated last year