peiswang / Two-Step-Quantization-AlexNet
Two-Step Quantization on AlexNet
☆13 · Updated 6 years ago
Alternatives and similar repositories for Two-Step-Quantization-AlexNet
Users interested in Two-Step-Quantization-AlexNet are comparing it to the libraries listed below.
- Caffe implementation of Optimal-Ternary-Weights-Approximation from "Two-Step Quantization for Low-bit Neural Networks" (CVPR 2018) ☆14 · Updated 6 years ago
- Implementation of "Towards Effective Low-bitwidth Convolutional Neural Networks" ☆41 · Updated 6 years ago
- Training examples for the CVPR 2018 paper "SYQ: Learning Symmetric Quantization For Efficient Deep Neural Netwo…" ☆31 · Updated 5 years ago
- Caffe implementation of Dynamic Network Surgery and Incremental Network Quantization ☆15 · Updated 7 years ago
- ☆45 · Updated 5 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers (https://arxiv.org/abs/1802.00124) ☆72 · Updated 6 years ago
- Caffe implementation of the ICCV 2017 & TPAMI 2018 paper ThiNet ☆46 · Updated 6 years ago
- A reproduction of Channel_pruning ☆11 · Updated 7 years ago
- ☆87 · Updated 6 years ago
- Some recent quantization techniques in PyTorch ☆72 · Updated 5 years ago
- ☆46 · Updated 5 years ago
- Caffe model of the ICCV 2017 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" (https://arxiv.org/abs/1707.06342) ☆147 · Updated 6 years ago
- ☆135 · Updated 6 years ago
- A pyCaffe implementation of the ICLR 2017 publication "Pruning Filters for Efficient ConvNets" ☆43 · Updated 7 years ago
- [ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions, surpassing MobileNetV2 ☆102 · Updated 5 years ago
- Related papers on efficient deep neural networks ☆86 · Updated 4 years ago
- Diagonalwise Refactorization: An Efficient Training Method for Depthwise Convolutions (in Caffe) ☆34 · Updated 7 years ago
- A script to convert floating-point CNN models into the generalized low-precision ShiftCNN representation ☆56 · Updated 7 years ago
- ☆57 · Updated 4 years ago
- KnowledgeDistillation Layer (Caffe implementation) ☆89 · Updated 8 years ago
- Class project for 18663: an implementation of FBNet (hardware-aware DNAS) ☆34 · Updated 5 years ago
- A tool to count operators and parameters of your MXNet-Gluon model ☆23 · Updated 5 years ago
- Dynamic Distribution Pruning for Efficient Network Architecture Search ☆48 · Updated 6 years ago
- ☆46 · Updated 5 years ago
- Example of applying Gaussian and Laplace clipping to the activations of a CNN ☆34 · Updated 6 years ago
- ☆35 · Updated 5 years ago
- Code for the FAT method, with links to quantized tflite models (CC BY-NC-ND) ☆19 · Updated 6 years ago
- Caffe implementation of accurate low-precision neural networks ☆117 · Updated 6 years ago
- Simulate quantization and quantization-aware training for MXNet-Gluon models ☆46 · Updated 5 years ago
- Android demo for dabnn ☆20 · Updated 5 years ago
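Several of the repositories above deal with ternary weight approximation, the building block of the Two-Step Quantization paper this page indexes. As a rough illustration only (not the paper's optimal solver): a common heuristic, from Ternary Weight Networks (Li et al., 2016), zeroes weights below a threshold and maps the rest to a single learned scale. The function name and the `0.7` threshold factor below are that heuristic's convention, not something taken from these repos.

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Approximate a float weight tensor with values in {-alpha, 0, +alpha}.

    Uses the TWN-style threshold heuristic; the repositories listed above
    may instead solve for the threshold and scale optimally.
    """
    delta = delta_scale * np.abs(w).mean()      # sparsity threshold
    mask = np.abs(w) > delta                    # positions kept non-zero
    # Scale alpha minimizes L2 error over the kept weights: mean of their magnitudes.
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
wt = ternarize(w)   # approximately [0.7, 0, 0.7, -0.7, 0]
```

Each weight tensor is then stored as a 2-bit code plus one float scale, which is what makes the low-bit networks in these repos cheap at inference time.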