agoncharenko1992 / FAT-fast-adjustable-threshold
This is the code for the FAT method, with links to quantized TFLite models. (CC BY-NC-ND)
☆19 · Updated 6 years ago
Alternatives and similar repositories for FAT-fast-adjustable-threshold
Users interested in FAT-fast-adjustable-threshold are comparing it to the libraries listed below.
- Class Project for 18663 - Implementation of FBNet (Hardware-Aware DNAS) ☆34 · Updated 5 years ago
- Some recent quantization techniques for PyTorch ☆72 · Updated 6 years ago
- Caffe implementation of ICCV 2017 & TPAMI 2018 paper - ThiNet ☆46 · Updated 7 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 5 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124 ☆72 · Updated 6 years ago
- Two-Step Quantization on AlexNet ☆13 · Updated 7 years ago
- Implementation of "Towards Effective Low-bitwidth Convolutional Neural Networks" ☆41 · Updated 7 years ago
- ☆46 · Updated 6 years ago
- Caffe model of ICCV'17 paper - ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression https://arxiv.org/abs/1707.06342 ☆147 · Updated 7 years ago
- This repository contains training examples for the CVPR 2018 paper "SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks" ☆31 · Updated 6 years ago
- ☆214 · Updated 6 years ago
- ☆39 · Updated 7 years ago
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018) ☆126 · Updated 7 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆167 · Updated 4 years ago
- ☆66 · Updated 5 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆140 · Updated 6 years ago
- ☆88 · Updated 7 years ago
- Caffe implementation of Optimal-Ternary-Weights-Approximation in "Two-Step Quantization for Low-bit Neural Networks" (CVPR 2018) ☆14 · Updated 7 years ago
- Neural architecture search (NAS) ☆14 · Updated 6 years ago
- Related papers on efficient deep neural networks ☆86 · Updated 4 years ago
- Reducing the size of convolutional neural networks ☆112 · Updated 7 years ago
- ☆137 · Updated 6 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆336 · Updated last year
- Caffe implementation of Incremental Network Quantization ☆191 · Updated 7 years ago
- Code for IJCAI 2019 paper ☆46 · Updated 6 years ago
- BMXNet 2: An Open-Source Binary Neural Network Implementation Based on MXNet ☆231 · Updated 3 years ago
- Implementation of "Data-free Knowledge Distillation for Deep Neural Networks" (on arXiv) ☆81 · Updated 7 years ago
- Training Low-bits DNNs with Stochastic Quantization ☆74 · Updated 8 years ago
- [ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions, surpassing MobileNetV2 ☆101 · Updated 5 years ago
- Efficient forward propagation for BCNNs ☆49 · Updated 8 years ago