agoncharenko1992 / FAT-fast-adjustable-threshold
This is the code for the FAT method, with links to quantized TFLite models. (CC BY-NC-ND)
☆19, updated 6 years ago
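FAT (fast adjustable threshold) refers to tuning the clipping threshold used for uniform quantization, as in the quantized TFLite models linked above. A minimal sketch of threshold-based fake quantization, assuming a standard uniform-grid formulation (the function name and formula are illustrative, not the repository's actual API):

```python
def quantize_with_threshold(x, threshold, bits=8):
    """Fake-quantize a value: clip to [0, threshold], round onto a uniform
    integer grid, then map back to floats. Adjusting `threshold` trades
    clipping error against rounding error."""
    levels = (1 << bits) - 1                      # e.g. 255 codes for 8 bits
    scale = threshold / levels                    # step size of the uniform grid
    code = min(max(round(x / scale), 0), levels)  # nearest code, clipped to range
    return code * scale                           # dequantized value

print(quantize_with_threshold(7.5, threshold=4.0))  # saturates at the threshold
print(quantize_with_threshold(1.0, threshold=4.0))  # in-range value, small rounding error
```

A larger threshold clips fewer activations but coarsens the grid; a smaller one refines the grid but saturates more values, which is why methods like FAT adjust it rather than fixing it at the observed maximum.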
Alternatives and similar repositories for FAT-fast-adjustable-threshold
Users interested in FAT-fast-adjustable-threshold are comparing it with the repositories listed below.
- Class project for 18663: an implementation of FBNet (hardware-aware DNAS) (☆34, updated 5 years ago)
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" (☆167, updated 5 years ago)
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers, https://arxiv.org/abs/1802.00124 (☆72, updated 6 years ago)
- Caffe implementation of the ICCV 2017 & TPAMI 2018 paper ThiNet (☆46, updated 7 years ago)
- Some recent quantization techniques in PyTorch (☆72, updated 6 years ago)
- Implementation of "Towards Effective Low-bitwidth Convolutional Neural Networks" (☆41, updated 7 years ago)
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices (☆168, updated 4 years ago)
- ☆88, updated 7 years ago
- ☆39, updated 7 years ago
- ☆136, updated 6 years ago
- ☆46, updated 6 years ago
- Training examples for the CVPR 2018 paper "SYQ: Learning Symmetric Quantization For Efficient Deep Neural Netwo…" (☆31, updated 6 years ago)
- ☆45, updated 6 years ago
- ☆67, updated 5 years ago
- ☆214, updated 6 years ago
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018) (☆126, updated 7 years ago)
- Paper list on model compression and acceleration (☆26, updated 6 years ago)
- Network acceleration methods (☆177, updated 4 years ago)
- Two-Step Quantization on AlexNet (☆13, updated 7 years ago)
- MobileNet model converted from TensorFlow (☆48, updated 7 years ago)
- Code for https://arxiv.org/abs/1810.04622 (☆140, updated 6 years ago)
- Caffe implementation of Optimal-Ternary-Weights-Approximation in "Two-Step Quantization for Low-bit Neural Networks" (CVPR 2018) (☆14, updated 7 years ago)
- Caffe model of the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression", https://arxiv.org/abs/1707.06342 (☆147, updated 7 years ago)
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" (☆196, updated 5 years ago)
- Code for "Discrimination-aware Channel Pruning for Deep Neural Networks" (☆183, updated 4 years ago)
- CNN channel pruning: LeGR, MorphNet, AMC. Codebase for the paper "LeGR: Filter Pruning via Learned Global Ranking" (☆115, updated 5 years ago)
- Implementation of NetAdapt (☆30, updated 4 years ago)
- [ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions, surpassing MobileNetV2 (☆102, updated 5 years ago)
- Code for an IJCAI 2019 paper (☆46, updated 6 years ago)
- Code for Centripetal SGD (☆63, updated 3 years ago)