mit-han-lab / tiny-training
On-Device Training Under 256KB Memory [NeurIPS'22]
☆497 · Updated last year
Alternatives and similar repositories for tiny-training
Users interested in tiny-training are comparing it to the libraries listed below.
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning… ☆619 · Updated last year
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning… ☆904 · Updated 11 months ago
- ☆1,011 · Updated last year
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆360 · Updated last year
- ☆242 · Updated 2 years ago
- MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers. ☆432 · Updated 2 months ago
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆353 · Updated 2 years ago
- ☆207 · Updated 4 years ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆424 · Updated this week
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆81 · Updated 4 years ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆854 · Updated 2 months ago
- ☆25 · Updated 3 years ago
- [NeurIPS 2023] MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory ☆73 · Updated 2 years ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆451 · Updated 2 years ago
- CMSIS-NN Library ☆324 · Updated last month
- Post-Training Quantization for Vision Transformers. ☆232 · Updated 3 years ago
- OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM ☆309 · Updated last year
- PyTorch implementation for the APoT quantization (ICLR 2020) ☆280 · Updated 11 months ago
- A lightweight, portable pure C99 ONNX inference engine for embedded devices with hardware acceleration support. ☆638 · Updated 3 months ago
- ☆163 · Updated 2 years ago
- PyTorch implementation of BRECQ, ICLR 2021 ☆285 · Updated 4 years ago
- TFLite model analyzer & memory optimizer ☆132 · Updated last year
- ☆243 · Updated 3 years ago
- A simple network quantization demo using PyTorch from scratch. ☆539 · Updated 2 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆129 · Updated 2 years ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆260 · Updated 2 years ago
- Unofficial implementation of LSQ-Net, a neural network quantization framework ☆304 · Updated last year
- ☆282 · Updated last year
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark ☆112 · Updated 2 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆94 · Updated 3 years ago