mit-han-lab / tiny-training
On-Device Training Under 256KB Memory [NeurIPS'22]
☆485 · Updated last year
Alternatives and similar repositories for tiny-training
Users interested in tiny-training are comparing it to the libraries listed below
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆594 · Updated last year
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆888 · Updated 9 months ago
- ☆951 · Updated last year
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆357 · Updated last year
- MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers ☆424 · Updated last week
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆81 · Updated 4 years ago
- ☆238 · Updated 2 years ago
- [NeurIPS 2023] MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory ☆70 · Updated last year
- TFLite model analyzer & memory optimizer ☆129 · Updated last year
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆347 · Updated 2 years ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆842 · Updated last week
- ☆206 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM ☆309 · Updated 11 months ago
- Post-Training Quantization for Vision transformers. ☆224 · Updated 3 years ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆412 · Updated last month
- This is a collection of our zero-cost NAS and efficient vision applications. ☆428 · Updated 2 years ago
- Quantization library for PyTorch. Support low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆444 · Updated 2 years ago
- Awesome machine learning model compression research papers, quantization, tools, and learning material. ☆532 · Updated 11 months ago
- Arm Machine Learning tutorials and examples ☆469 · Updated last month
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark ☆111 · Updated 2 years ago
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices ☆44 · Updated 5 years ago
- A library for researching neural networks compression and acceleration methods. ☆139 · Updated last year
- A simple network quantization demo using PyTorch from scratch (a minimal sketch of the same idea follows this list). ☆534 · Updated 2 years ago
- [ICCAD'22 TinyML Contest] Efficient Heart Stroke Detection on Low-cost Microcontrollers ☆14 · Updated 2 years ago
- PyTorch implementation for the APoT quantization (ICLR 2020) ☆277 · Updated 8 months ago
- Vendor-independent TinyML deep learning library, compiler and inference framework for microcomputers and micro-controllers ☆597 · Updated last month
- A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support. ☆628 · Updated 3 weeks ago
- This is a list of interesting papers and projects about TinyML. ☆907 · Updated last week
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆126 · Updated 2 years ago
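
Several of the entries above are quantization toolkits (the post-training quantization libraries, the mixed-precision libraries, and the from-scratch PyTorch demo). As a rough illustration of the core operation they all build on, the sketch below implements plain uniform affine quantization of a single tensor in PyTorch. It is not the API of any listed repository; the function names, the 8-bit setting, and the min/max calibration are illustrative choices made here.

```python
import torch

def quantize_tensor(x: torch.Tensor, num_bits: int = 8):
    """Uniform affine quantization: map floats onto an unsigned num_bits integer grid."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = x.min().item(), x.max().item()
    # The scale stretches the observed float range over the integer range;
    # the epsilon guards against a zero range for constant tensors.
    scale = max(x_max - x_min, 1e-8) / (qmax - qmin)
    # The zero point is the integer that represents float 0.0, clamped into range.
    zero_point = int(round(qmin - x_min / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax).to(torch.uint8)
    return q, scale, zero_point

def dequantize_tensor(q: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    """Map quantized integers back to approximate float values."""
    return scale * (q.to(torch.float32) - zero_point)

if __name__ == "__main__":
    w = torch.randn(64, 64)
    q, scale, zp = quantize_tensor(w)
    w_hat = dequantize_tensor(q, scale, zp)
    print("max abs reconstruction error:", (w - w_hat).abs().max().item())
```

Real toolkits layer calibration datasets, per-channel scales, and hardware-aware rounding or mixed precision on top of this basic mapping, which is largely where the libraries listed above differ from one another.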