mit-han-lab / tinyengine
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
☆888 · Updated 8 months ago
Alternatives and similar repositories for tinyengine
Users interested in tinyengine are comparing it to the libraries listed below.
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆594 · Updated last year
- ☆951 · Updated last year
- On-Device Training Under 256KB Memory [NeurIPS'22] ☆484 · Updated last year
- MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers ☆423 · Updated this week
- Vendor-independent TinyML deep learning library, compiler, and inference framework for microcomputers and microcontrollers ☆597 · Updated last month
- CMSIS-NN Library ☆303 · Updated 2 weeks ago
- ☆238 · Updated 2 years ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆411 · Updated last month
- This is a list of interesting papers and projects about TinyML. ☆903 · Updated 3 weeks ago
- Arm Machine Learning tutorials and examples ☆470 · Updated last month
- A curated list of resources for embedded AI ☆451 · Updated 3 weeks ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆842 · Updated this week
- TFLite model analyzer & memory optimizer ☆129 · Updated last year
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆81 · Updated 4 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆357 · Updated last year
- ☆333 · Updated last year
- A lightweight, portable pure C99 ONNX inference engine for embedded devices with hardware acceleration support. ☆627 · Updated 3 weeks ago
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn ☆1,276 · Updated 2 weeks ago
- Open Neural Network Exchange to C compiler. ☆310 · Updated last week
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,432 · Updated this week
- Generate TFLite Micro code that bypasses the interpreter (directly calls into kernels) ☆81 · Updated 3 years ago
- TinyChatEngine: On-Device LLM Inference Library ☆884 · Updated last year
- AI Model Zoo for STM32 devices ☆465 · Updated 3 weeks ago
- Awesome machine learning model compression research papers, quantization, tools, and learning material. ☆531 · Updated 11 months ago
- Pure C ONNX runtime with zero dependencies for embedded devices ☆210 · Updated last year
- Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digit… ☆2,450 · Updated this week
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,925 · Updated last year
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆442 · Updated 2 years ago
- An Open-Source Library for Training Binarized Neural Networks ☆719 · Updated last year
- Open deep learning compiler stack for Kendryte AI accelerators ✨ ☆812 · Updated this week