Roll920 / ThiNet_Code
Caffe implementation of the ICCV 2017 & TPAMI 2018 paper ThiNet
☆46 · Updated 6 years ago
Alternatives and similar repositories for ThiNet_Code:
Users interested in ThiNet_Code are comparing it to the repositories listed below
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers (https://arxiv.org/abs/1802.00124) ☆71 · Updated 6 years ago
- Caffe model of the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" (https://arxiv.org/abs/1707.06342); see the pruning sketch after this list ☆146 · Updated 6 years ago
- Code for an IJCAI 2019 paper ☆46 · Updated 5 years ago
- Applies a pruning strategy to MobileNet_v2 ☆51 · Updated 5 years ago
- ☆87 · Updated 6 years ago
- ☆45 · Updated 5 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆166 · Updated 3 years ago
- Simulates quantization and quantization-aware training for MXNet-Gluon models ☆46 · Updated 4 years ago
- ☆134 · Updated 6 years ago
- Implementation of "Towards Effective Low-bitwidth Convolutional Neural Networks" ☆42 · Updated 6 years ago
- A pyCaffe implementation of the ICLR 2017 paper "Pruning Filters for Efficient ConvNets" ☆43 · Updated 6 years ago
- Knowledge distillation layer (Caffe implementation) ☆89 · Updated 7 years ago
- Training examples for the CVPR 2018 paper "SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks" ☆31 · Updated 5 years ago
- ☆45 · Updated 5 years ago
- Two-Step Quantization on AlexNet ☆13 · Updated 6 years ago
- CNN channel pruning, LeGR, MorphNet, AMC. Codebase for the paper "LeGR: Filter Pruning via Learned Global Ranking" ☆114 · Updated 4 years ago
- Code for "Discrimination-aware Channel Pruning for Deep Neural Networks" ☆184 · Updated 4 years ago
- ☆38 · Updated 6 years ago
- alibabacloud-quantization-networks ☆121 · Updated 5 years ago
- [ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions, surpassing MobileNetV2 ☆102 · Updated 4 years ago
- Shared model. Top-1: 67.408 / Top-5: 87.258 ☆48 · Updated 6 years ago
- ☆55 · Updated 4 years ago
- TensorFlow code for "Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers" ☆30 · Updated 5 years ago
- ☆66 · Updated 5 years ago
- Deep learning model compression based on Keras ☆32 · Updated 6 years ago
- Implementation of NetAdapt ☆30 · Updated 4 years ago
- Caffe implementation of Optimal-Ternary-Weights-Approximation in "Two-Step Quantization for Low-bit Neural Networks" (CVPR 2018) ☆14 · Updated 6 years ago
- ☆120 · Updated 4 years ago
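Several of the entries above implement filter-level channel pruning; ThiNet in particular greedily selects, for each layer, the input channels whose removal least changes the next layer's output at sampled locations. The sketch below is an illustrative NumPy reconstruction of that greedy selection step, not code from any of the listed repositories; the names `thinet_select_channels`, `contrib`, and `keep_ratio` are invented for the example.

```python
import numpy as np

def thinet_select_channels(contrib, keep_ratio=0.5):
    """Greedy channel selection in the spirit of ThiNet (illustrative sketch).

    contrib    : (num_samples, in_channels) array; contrib[i, c] is channel c's
                 contribution to the next layer's output at sampled location i.
    keep_ratio : fraction of input channels to keep (hypothetical parameter).

    Returns the indices of channels to keep, chosen so that removing the
    other channels perturbs the sampled outputs as little as possible.
    """
    num_channels = contrib.shape[1]
    num_remove = num_channels - int(round(keep_ratio * num_channels))

    removed = []                                # channels marked for removal
    removed_sum = np.zeros(contrib.shape[0])    # summed contribution of removed channels

    for _ in range(num_remove):
        best_c, best_err = None, None
        for c in range(num_channels):
            if c in removed:
                continue
            # squared output error if channel c were also removed
            err = np.sum((removed_sum + contrib[:, c]) ** 2)
            if best_err is None or err < best_err:
                best_c, best_err = c, err
        removed.append(best_c)
        removed_sum += contrib[:, best_c]

    return sorted(set(range(num_channels)) - set(removed))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    contrib = rng.normal(size=(128, 64))        # 128 sampled locations, 64 input channels
    kept = thinet_select_channels(contrib, keep_ratio=0.5)
    print(f"kept {len(kept)} of 64 channels")
```

In the paper's full pipeline the surviving filters are fine-tuned after each pruned layer, which this sketch omits.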