mit-han-lab / tinyengine
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
☆920 · Updated last year
Alternatives and similar repositories for tinyengine
Users interested in tinyengine compare it to the libraries listed below.
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆644 · Updated last year
- ☆1,064 · Updated 2 years ago
- On-Device Training Under 256KB Memory [NeurIPS'22] ☆509 · Updated last year
- MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers ☆440 · Updated last week
- CMSIS-NN Library ☆357 · Updated last month
- ☆248 · Updated 2 years ago
- Vendor-independent TinyML deep learning library, compiler and inference framework for microcomputers and microcontrollers ☆600 · Updated 5 months ago
- Arm Machine Learning tutorials and examples ☆479 · Updated this week
- This is a list of interesting papers and projects about TinyML. ☆974 · Updated last month
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆430 · Updated last week
- A curated list of resources for embedded AI ☆492 · Updated this week
- Open Neural Network Exchange to C compiler. ☆351 · Updated 3 weeks ago
- TFLite model analyzer & memory optimizer ☆135 · Updated last year
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆82 · Updated 4 years ago
- Pure C ONNX runtime with zero dependencies for embedded devices ☆214 · Updated 2 years ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆862 · Updated 3 weeks ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆362 · Updated last year
- TinyChatEngine: On-Device LLM Inference Library ☆939 · Updated last year
- AI Model Zoo for STM32 devices ☆585 · Updated 3 weeks ago
- A lightweight, portable pure C99 ONNX inference engine for embedded devices with hardware acceleration support. ☆645 · Updated 5 months ago
- Generates TFLite Micro code that bypasses the interpreter (directly calls into kernels) ☆82 · Updated 3 years ago
- Arm NN ML Software. ☆1,292 · Updated last month
- An Open-Source Library for Training Binarized Neural Networks ☆724 · Updated last year
- A parser, editor and profiler tool for ONNX models. ☆473 · Updated 2 months ago
- TinyMaix is a tiny inference library for microcontrollers (TinyML). ☆1,030 · Updated 11 months ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆452 · Updated 2 years ago
- ethos-u-vela is the ML model compiler tool used to compile a TFLite-Micro model into an optimised version for the Ethos-U NPU on iMX93 pl… ☆34 · Updated last month
- ☆340 · Updated 2 years ago
- ONNX Optimizer ☆790 · Updated this week
- Open deep learning compiler stack for Kendryte AI accelerators ✨ ☆853 · Updated this week