SKKU-ESLAB / Auto-Compression
Automatic DNN compression tool with various model compression and neural architecture search techniques
☆21 · Updated 7 months ago
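Auto-Compression combines several compression techniques such as pruning and quantization. As a minimal, hedged illustration of one of them (not code taken from the repository itself), the sketch below performs global magnitude pruning of a weight tensor in PyTorch; the function name, threshold rule, and toy tensor are assumptions made for this example only.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a binary mask that zeroes out the `sparsity` fraction of
    weights with the smallest magnitude (global magnitude pruning)."""
    assert 0.0 <= sparsity < 1.0
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    # Threshold = magnitude of the k-th smallest-magnitude weight.
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

if __name__ == "__main__":
    w = torch.randn(128, 128)            # stand-in for a layer's weight matrix
    mask = magnitude_prune(w, sparsity=0.7)
    w_pruned = w * mask
    print("fraction of zeros:", (w_pruned == 0).float().mean().item())
```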
Alternatives and similar repositories for Auto-Compression
Users interested in Auto-Compression are comparing it to the libraries listed below
- Arm Compute Library implementation of an efficient low-precision neural network ☆25 · Updated 5 years ago
- CNN functions for dense matrices resident in flash storage ☆23 · Updated 6 years ago
- ANT framework's model database that provides DNN models for a wide range of IoT devices ☆17 · Updated last month
- Virtual Connection: Framework for P2P Communication Abstraction ☆23 · Updated 5 years ago
- BitSplit Post-training Quantization ☆50 · Updated 3 years ago
- Train neural networks with joint quantization and pruning on both weights and activations using any PyTorch modules ☆43 · Updated 3 years ago
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆50 · Updated last year
- Code for our paper at ECCV 2020: Post-Training Piecewise Linear Quantization for Deep Neural Networks ☆68 · Updated 4 years ago
- PyTorch implementation of EdMIPS: https://arxiv.org/pdf/2004.05795.pdf ☆60 · Updated 5 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- PyTorch implementation of Towards Efficient Training for Neural Network Quantization ☆16 · Updated 5 years ago
- [CVPR'20] ZeroQ Mixed-Precision implementation (unofficial): A Novel Zero Shot Quantization Framework ☆14 · Updated 4 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆35 · Updated 2 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆98 · Updated 4 years ago
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆279 · Updated last year
- PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆56 · Updated 6 years ago
- ☆36 · Updated 6 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆94 · Updated 3 years ago
- PyTorch implementation for the APoT quantization (ICLR 2020) ☆281 · Updated 11 months ago
- Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design ☆160 · Updated 4 years ago
- ☆49 · Updated 3 years ago
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision ☆398 · Updated 4 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated 2 years ago
- Position-based Scaled Gradient for Model Quantization and Pruning Code (NeurIPS 2020) ☆25 · Updated 5 years ago
- Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm. In ECCV 2… ☆186 · Updated 4 years ago
- source code of the paper: Robust Quantization: One Model to Rule Them All ☆40 · Updated 2 years ago
- [CVPR 2020] APQ: Joint Search for Network Architecture, Pruning and Quantization Policy ☆159 · Updated 5 years ago
- ☆47 · Updated 5 years ago
- Conditional channel- and precision-pruning on neural networks ☆72 · Updated 5 years ago
- Enhanced version of IoT.js for ANT Framework - Platform for Internet of Things with JavaScript ☆15 · Updated 4 years ago
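Most of the repositories above revolve around post-training quantization of DNN weights and activations. As a rough, hedged illustration of the core idea (not taken from any of the listed projects), the sketch below quantizes a float tensor to 8-bit integers with uniform affine (min/max) calibration and dequantizes it back; the function names and the toy tensor are assumptions for this example only.

```python
import torch

def quantize_tensor(x: torch.Tensor, num_bits: int = 8):
    """Uniform affine (asymmetric) quantization of a float tensor.

    Returns the integer codes plus the (scale, zero_point) pair needed to
    dequantize, following the standard min/max calibration rule."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = x.min().item(), x.max().item()
    # Guard against a constant tensor, where the scale would be zero.
    scale = max(x_max - x_min, 1e-8) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax).to(torch.uint8)
    return q, scale, zero_point

def dequantize_tensor(q: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    """Map the integer codes back to (approximate) float values."""
    return scale * (q.float() - zero_point)

if __name__ == "__main__":
    w = torch.randn(64, 64)                      # stand-in for a layer's weights
    q, s, zp = quantize_tensor(w, num_bits=8)
    w_hat = dequantize_tensor(q, s, zp)
    print("max abs error:", (w - w_hat).abs().max().item())
```

Per-tensor min/max calibration as shown is the simplest choice; several of the listed projects instead use per-channel scales, layer-wise calibration, or mixed precision to reduce the quantization error.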