CAS-CLab / Optimal-Ternary-Weights-Approximation
Caffe implementation of Optimal-Ternary-Weights-Approximation in "Two-Step Quantization for Low-bit Neural Networks" (CVPR2018).
☆14 · Updated 6 years ago
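For context, the operation named in the repo title is the closed-form ternary approximation used inside TSQ's weight-quantization step: a full-precision weight tensor W is approximated by a scale α times a ternary tensor T with entries in {-1, 0, +1} so that ‖W − αT‖² is minimized. Below is a minimal NumPy sketch of that standard sort-and-prefix-sum solution; the function name, per-tensor granularity, and the example shapes are illustrative assumptions, not taken from the repo's Caffe code.

```python
import numpy as np

def optimal_ternary_approx(w):
    """Approximate w by alpha * t with t in {-1, 0, +1}, minimizing
    ||w - alpha * t||_2^2 jointly over the scale alpha and the ternary tensor t.
    (Illustrative sketch; not the repo's Caffe implementation.)"""
    a = np.abs(w).ravel()
    order = np.argsort(a)[::-1]          # indices of |w| in descending order
    csum = np.cumsum(a[order])           # prefix sums of the largest magnitudes
    k = np.arange(1, a.size + 1)
    # With k non-zero ternary entries, the optimal alpha is mean(top-k |w|),
    # and the residual error is minimized by maximizing csum_k^2 / k over k.
    best_k = int(np.argmax(csum ** 2 / k)) + 1
    alpha = csum[best_k - 1] / best_k
    t = np.zeros(a.size)
    keep = order[:best_k]
    t[keep] = np.sign(w.ravel()[keep])
    return alpha, t.reshape(w.shape)

# Example: ternarize a random conv-like weight tensor and report the relative error.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
alpha, t = optimal_ternary_approx(w)
print(alpha, np.linalg.norm(w - alpha * t) / np.linalg.norm(w))
```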
Alternatives and similar repositories for Optimal-Ternary-Weights-Approximation:
Users interested in Optimal-Ternary-Weights-Approximation are comparing it to the repositories listed below.
- Two-Step Quantization on AlexNet ☆13 · Updated 6 years ago
- Implementation of Towards Effective Low-bitwidth Convolutional Neural Networks ☆42 · Updated 6 years ago
- ☆45 · Updated 5 years ago
- Diagonalwise Refactorization: An Efficient Training Method for Depthwise Convolutions (in Caffe) ☆34 · Updated 7 years ago
- Caffe implementation of the ICCV 2017 & TPAMI 2018 paper ThiNet ☆46 · Updated 6 years ago
- ☆87 · Updated 6 years ago
- [ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions, surpassing MobileNetV2 ☆102 · Updated 4 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers (https://arxiv.org/abs/1802.00124) ☆71 · Updated 6 years ago
- A simplified, accuracy-preserving variant of ResNet-50 built with the inverted residual construction ☆38 · Updated 6 years ago
- ☆68 · Updated 5 years ago
- Caffe implementation ☆23 · Updated 3 years ago
- Caffe implementation of Dynamic Network Surgery and Incremental Network Quantization ☆15 · Updated 7 years ago
- ☆134 · Updated 6 years ago
- Code for an IJCAI 2019 paper ☆46 · Updated 5 years ago
- A tool to count operators and parameters of your MXNet-Gluon model ☆21 · Updated 4 years ago
- KnowledgeDistillation Layer (Caffe implementation) ☆89 · Updated 7 years ago
- A Gluon implementation of MnasNet ☆59 · Updated 6 years ago
- A reproduction of Channel_pruning ☆11 · Updated 6 years ago
- ShuffleNetV1 & ShuffleNetV2 implementations in MXNet ☆18 · Updated 6 years ago
- Applies a pruning strategy to MobileNet_v2 ☆51 · Updated 5 years ago
- Implementation of the CVPR 2019 paper Distilling Object Detectors with Fine-grained Feature Imitation ☆27 · Updated 2 years ago
- PyTorch code for the paper Learning Versatile Filters for Efficient Convolutional Neural Networks (NeurIPS 2018)