benja263 / Integer-Only-Inference-for-Deep-Learning-in-Native-C
Converting a deep neural network to integer-only inference in native C via uniform quantization and the fixed-point representation.
☆23 · Updated 3 years ago
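The description above mentions uniform quantization and fixed-point arithmetic for integer-only inference in native C. As a rough illustration only (the function names, int8 layout, and Q16.16 requantization scheme below are assumptions for this sketch, not taken from the repository's code), affine quantization plus an integer-only dot product might look like this:

```c
/*
 * Minimal sketch of uniform (affine) quantization and an integer-only
 * fixed-point dot product. Illustrative only; not the repository's API.
 * Build with: cc quant_sketch.c -lm
 */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Quantize a float to int8: q = round(x / scale) + zero_point, clamped. */
static int8_t quantize(float x, float scale, int32_t zero_point) {
    int32_t q = (int32_t)lrintf(x / scale) + zero_point;
    if (q < -128) q = -128;
    if (q > 127)  q = 127;
    return (int8_t)q;
}

/* Integer-only dot product: accumulate in int32, then requantize to int8
 * with a precomputed Q16.16 fixed-point multiplier (illustrative choice). */
static int8_t int_dot(const int8_t *a, const int8_t *b, int n,
                      int32_t a_zp, int32_t b_zp,
                      int32_t out_zp, int32_t mult_q16)
{
    int32_t acc = 0;
    for (int i = 0; i < n; ++i)
        acc += ((int32_t)a[i] - a_zp) * ((int32_t)b[i] - b_zp);

    /* Requantize: multiply by the Q16.16 multiplier and drop the 16
     * fractional bits (truncation here; production code would round). */
    int64_t scaled = ((int64_t)acc * (int64_t)mult_q16) / 65536;
    scaled += out_zp;
    if (scaled < -128) scaled = -128;
    if (scaled > 127)  scaled = 127;
    return (int8_t)scaled;
}

int main(void) {
    const float a_f[4] = {0.5f, -1.0f, 2.0f, 0.25f};
    const float b_f[4] = {1.0f,  0.5f, -0.5f, 2.0f};

    /* Scales and zero points would normally come from calibration. */
    const float a_scale = 0.02f, b_scale = 0.02f, out_scale = 0.05f;
    const int32_t a_zp = 0, b_zp = 0, out_zp = 0;

    int8_t a_q[4], b_q[4];
    for (int i = 0; i < 4; ++i) {
        a_q[i] = quantize(a_f[i], a_scale, a_zp);
        b_q[i] = quantize(b_f[i], b_scale, b_zp);
    }

    /* Real multiplier a_scale * b_scale / out_scale folded into Q16.16. */
    int32_t mult_q16 = (int32_t)lrintf((a_scale * b_scale / out_scale) * 65536.0f);

    int8_t out_q = int_dot(a_q, b_q, 4, a_zp, b_zp, out_zp, mult_q16);
    printf("quantized result: %d (~%.3f)\n", out_q,
           (out_q - out_zp) * out_scale);
    return 0;
}
```

The key point of the pattern is that, once the scales are folded into a single fixed-point multiplier at conversion time, the inference path uses only integer multiplies, adds, and shifts or divides.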
Alternatives and similar repositories for Integer-Only-Inference-for-Deep-Learning-in-Native-C
Users interested in Integer-Only-Inference-for-Deep-Learning-in-Native-C are comparing it with the libraries listed below
- Floating-Point Optimized On-Device Learning Library for the PULP Platform. ☆34 · Updated last week
- The open-source version of TinyTS. The code is still rough; we may clean it up in the future. ☆16 · Updated 10 months ago
- ☆36 · Updated last year
- Tool for the deployment and analysis of TinyML applications on TFLM and MicroTVM backends ☆34 · Updated this week
- A tool to deploy Deep Neural Networks on PULP-based SoCs ☆80 · Updated 3 months ago
- A Toy-Purpose TPU Simulator ☆18 · Updated 11 months ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆148 · Updated this week
- A library to train and deploy quantised Deep Neural Networks ☆24 · Updated 4 months ago
- This repository contains the results and code for the MLPerf™ Tiny Inference v0.7 benchmark. ☆17 · Updated last year
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆86 · Updated 2 years ago
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices ☆41 · Updated 5 years ago
- Curated content on DNN approximation, acceleration ... with a focus on hardware accelerators and deployment ☆25 · Updated last year
- muRISCV-NN is a collection of efficient deep learning kernels for embedded platforms and microcontrollers. ☆77 · Updated 2 months ago
- Low Precision (quantized) Yolov5 ☆37 · Updated last month
- A plug-and-play lightweight tool for the inference optimization of Deep Neural Networks ☆41 · Updated last week
- ☆30 · Updated 2 years ago
- Notes on deep learning algorithms, frameworks, compilers, and accelerators ☆15 · Updated 2 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆48 · Updated last year
- ☆146 · Updated 2 years ago
- LCAI-TIHU SW is the software stack for a RISC-V-based AI inference processor ☆23 · Updated 2 years ago
- TensorCore Vector Processor for Deep Learning - Google Summer of Code Project ☆21 · Updated 3 years ago
- ☆42 · Updated 3 weeks ago
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆52 · Updated last month
- [ICCAD'22 TinyML Contest] Efficient Heart Stroke Detection on Low-cost Microcontrollers ☆14 · Updated 2 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 5 months ago
- Torch2Chip (MLSys 2024) ☆51 · Updated last month
- Learn NVDLA by SOMNIA ☆33 · Updated 5 years ago
- A survey on Hardware-Accelerated LLMs ☆51 · Updated 4 months ago
- Example for running IREE in a bare-metal Arm environment. ☆33 · Updated 2 months ago
- ☆44 · Updated 5 years ago