microsoft / nn-Meter
A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices.
☆345 · Updated 7 months ago
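For context, nn-Meter is driven through a small Python interface. The snippet below is a minimal sketch only, assuming the `load_latency_predictor` / `predict` calls and the `cortexA76cpu_tflite21` predictor name shown in the nn-Meter README; the torchvision model, input shape, and exact argument names are illustrative and may differ across releases.

```python
# Minimal sketch of querying an nn-Meter latency predictor.
# Assumes the load_latency_predictor / predict API from the nn-Meter README;
# predictor name, model, and keyword arguments here are illustrative.
import torchvision.models as models
from nn_meter import load_latency_predictor

# Load one of the pre-built hardware predictors (here: a Cortex-A76 CPU
# running TFLite 2.1, as listed in the project documentation).
predictor = load_latency_predictor("cortexA76cpu_tflite21")

# Predict end-to-end inference latency (reported in milliseconds) for a
# PyTorch model without running it on the target device.
model = models.resnet18()
latency_ms = predictor.predict(model, model_type="torch",
                               input_shape=(1, 3, 224, 224))
print(f"Predicted latency: {latency_ms:.2f} ms")
```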
Alternatives and similar repositories for nn-Meter:
Users interested in nn-Meter are comparing it to the libraries listed below:
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆429 · Updated last year
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision ☆380 · Updated 4 years ago
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆277 · Updated last year
- Model Quantization Benchmark ☆795 · Updated 2 months ago
- Quantization of Convolutional Neural Networks. ☆244 · Updated 7 months ago
- PyTorch implementation for the APoT quantization (ICLR 2020) ☆271 · Updated 3 months ago
- PyTorch implementation of BRECQ, ICLR 2021 ☆269 · Updated 3 years ago
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction. ☆261 · Updated last year
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆197 · Updated 2 years ago
- Measuring and predicting on-device metrics (latency, power, etc.) of machine learning models ☆66 · Updated last year
- ☆226 · Updated 2 years ago
- [CVPR 2020] APQ: Joint Search for Network Architecture, Pruning and Quantization Policy ☆157 · Updated 4 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆440 · Updated last year
- Awesome machine learning model compression research papers, quantization, tools, and learning material. ☆507 · Updated 6 months ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆96 · Updated 3 years ago
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark ☆110 · Updated last year
- ☆225 · Updated 3 years ago
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆329 · Updated last year
- A simple network quantization demo using PyTorch from scratch. ☆522 · Updated last year
- ☆141 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆201 · Updated 2 years ago
- Pruning Neural Networks with Taylor criterion in PyTorch ☆315 · Updated 5 years ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆428 · Updated last year
- Summary, Code for Deep Neural Network Quantization ☆547 · Updated 5 months ago
- Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design ☆160 · Updated 4 years ago
- ☆202 · Updated 3 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆113 · Updated last year
- The official PyTorch implementation of the ICLR 2022 paper, QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quan… ☆115 · Updated last year
- Unofficial implementation of LSQ-Net, a neural network quantization framework ☆289 · Updated 10 months ago