SKKU-ESLAB / Auto-Compression
Automatic DNN compression tool with various model compression and neural architecture search techniques
☆21 · Updated 4 months ago
Alternatives and similar repositories for Auto-Compression
Users interested in Auto-Compression are comparing it to the repositories listed below.
- Arm Compute Library implementation of efficient low-precision neural networks ☆25 · Updated 4 years ago
- ANT framework's model database, providing DNN models for a wide range of IoT devices ☆17 · Updated 2 months ago
- CNN functions for dense matrices resident in flash storage ☆23 · Updated 5 years ago
- Virtual Connection: Framework for P2P Communication Abstraction ☆23 · Updated 4 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- BitSplit post-training quantization ☆50 · Updated 3 years ago
- [CVPR'20] ZeroQ mixed-precision implementation (unofficial): A Novel Zero Shot Quantization Framework ☆14 · Updated 4 years ago
- Code for the ECCV 2020 paper: Post-Training Piecewise Linear Quantization for Deep Neural Networks ☆69 · Updated 3 years ago
- PyTorch implementation of "Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks" ☆131 · Updated 5 years ago
- Prune DNNs using the Alternating Direction Method of Multipliers (ADMM) ☆98 · Updated 5 years ago
- ☆36 · Updated 6 years ago
- Official implementation of the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆58 · Updated 2 years ago
- Code for the paper "NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks" ☆21 · Updated 6 years ago
- A PyTorch implementation of DoReFa-Net ☆132 · Updated 5 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- Code for Position-based Scaled Gradient for Model Quantization and Pruning (NeurIPS 2020) ☆26 · Updated 4 years ago
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆50 · Updated last year
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆36 · Updated 2 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆95 · Updated 3 years ago
- Code for the "Fast Sparse ConvNets" CVPR 2020 submission ☆13 · Updated 5 years ago
- PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆58 · Updated 6 years ago
- ☆47 · Updated 3 years ago
- ☆48 · Updated 5 years ago
- Enhanced version of IoT.js for the ANT Framework, a platform for the Internet of Things with JavaScript ☆15 · Updated 4 years ago
- PyTorch implementation of "Towards Efficient Training for Neural Network Quantization" ☆15 · Updated 5 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆41 · Updated 4 years ago
- Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm. In ECCV 2… ☆186 · Updated 4 years ago
- An implementation of ResNet with mixup and cutout regularization and soft filter pruning ☆16 · Updated 5 years ago
- Accelerating CNN convolution operations on GPUs by using memory-efficient data access patterns ☆14 · Updated 7 years ago
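Most of the quantization repositories above build on the same primitive: affine (scale + zero-point) uniform quantization of floating-point tensors to 8-bit integers. As a rough orientation only, here is a minimal NumPy sketch of that primitive; the function names are our own illustration and do not correspond to any API in the listed projects.

```python
import numpy as np

def quantize_uint8(w):
    """Affine (asymmetric) uniform quantization of a float tensor to uint8.

    Maps [w.min(), w.max()] onto the integer range [0, 255] via a real-valued
    scale and an integer zero-point, the basic scheme that post-training
    quantization tools refine (e.g. with better calibration or bit-splitting).
    """
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0   # avoid div-by-zero on constants
    zero_point = int(round(-lo / scale))            # integer that represents 0.0
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float tensor."""
    return (q.astype(np.float32) - zero_point) * scale

# Round-trip a random weight tensor: the reconstruction error is bounded
# by roughly half a quantization step (scale / 2).
rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s, z = quantize_uint8(w)
w_hat = dequantize(q, s, z)
err = float(np.abs(w - w_hat).max())
```

The listed projects differ mainly in how they pick the clipping range and bit-width per layer (mixed precision, layer-wise calibration, integer programming), not in this core mapping.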