mit-han-lab / tiny-training
On-Device Training Under 256KB Memory [NeurIPS'22]
☆503 · Updated last year
Alternatives and similar repositories for tiny-training
Users interested in tiny-training are comparing it to the libraries listed below.
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆633 · Updated last year
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… ☆913 · Updated last year
- MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers ☆437 · Updated 3 weeks ago
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆82 · Updated 4 years ago
- [NeurIPS 2023] MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory ☆75 · Updated 2 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆360 · Updated last year
- CMSIS-NN Library ☆345 · Updated last week
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆354 · Updated 2 years ago
- Arm Machine Learning tutorials and examples ☆478 · Updated last week
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆428 · Updated this week
- List of papers related to neural network quantization in recent AI conferences and journals. ☆771 · Updated 8 months ago
- PyTorch implementation for the APoT quantization (ICLR 2020) ☆281 · Updated last year
- Post-Training Quantization for Vision Transformers. ☆235 · Updated 3 years ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆861 · Updated 4 months ago
- OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM ☆310 · Updated last year
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆452 · Updated 2 years ago
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark ☆113 · Updated 2 years ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆265 · Updated 2 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆129 · Updated 2 years ago
- A lightweight, portable pure C99 ONNX inference engine for embedded devices with hardware acceleration support. ☆642 · Updated 4 months ago
- Awesome machine learning model compression research papers, quantization, tools, and learning material. ☆540 · Updated last year
- TFLite model analyzer & memory optimizer ☆135 · Updated last year
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆93 · Updated 3 years ago
- Open Neural Network Exchange (ONNX) to C compiler. ☆344 · Updated this week
- A simple network quantization demo using PyTorch from scratch. ☆541 · Updated 2 years ago
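Many of the repositories above (FQ-ViT, APoT, I-BERT, F8Net, the quantization demos) revolve around low-bit quantization. As a rough illustration of the shared core idea, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python; the function names and sample values are illustrative only and are not taken from any of the listed projects:

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization to int8 (minimal sketch).

    The scale maps the largest magnitude onto the int8 range
    [-127, 127]; each value is then rounded to the nearest step.
    """
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats; error is bounded by scale / 2.
    return [v * scale for v in q]

# Illustrative weights (hypothetical, not from any listed repo).
weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Real frameworks in the list add per-channel scales, zero points for asymmetric ranges, and calibration over activation statistics, but the round-and-clamp step above is the common building block.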