mit-han-lab / tiny-training
On-Device Training Under 256KB Memory [NeurIPS'22]
☆511 · Updated last year
Alternatives and similar repositories for tiny-training
Users interested in tiny-training are comparing it to the libraries listed below.
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… · ☆648 · Updated last year
- [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep L… · ☆922 · Updated last year
- MLPerf® Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers · ☆443 · Updated 3 weeks ago
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. · ☆82 · Updated 5 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. · ☆364 · Updated last year
- [NeurIPS 2023] MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory · ☆76 · Updated 2 years ago
- CMSIS-NN Library · ☆360 · Updated 2 weeks ago
- OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM · ☆310 · Updated last year
- Awesome machine learning model compression research papers, quantization, tools, and learning material. · ☆540 · Updated last year
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer · ☆359 · Updated 2 years ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. · ☆862 · Updated last month
- A simple network quantization demo using PyTorch from scratch. · ☆543 · Updated 2 years ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. · ☆453 · Updated 2 years ago
- Post-Training Quantization for Vision transformers. · ☆237 · Updated 3 years ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… · ☆432 · Updated last week
- Arm Machine Learning tutorials and examples · ☆480 · Updated 2 weeks ago
- TFLite model analyzer & memory optimizer · ☆135 · Updated 2 years ago
- This is a collection of our zero-cost NAS and efficient vision applications. · ☆448 · Updated 2 years ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization · ☆266 · Updated 3 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". · ☆129 · Updated 2 years ago
- List of papers related to neural network quantization in recent AI conferences and journals. · ☆792 · Updated 10 months ago
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark · ☆114 · Updated 2 years ago
- PyTorch implementation for the APoT quantization (ICLR 2020) · ☆283 · Updated last year