irapha / replayed_distillation
Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arxiv!)
☆81Updated 7 years ago
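For context, the repository implements data-free knowledge distillation, which still rests on the standard soft-target distillation objective (Hinton et al., 2015) once surrogate data has been reconstructed. The sketch below is a generic PyTorch illustration of that objective, not code from this (TensorFlow-based) repository; the function name and the temperature value are illustrative assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target distillation loss: KL divergence between the
    temperature-softened teacher and student distributions, scaled
    by T^2 so gradient magnitudes stay comparable across temperatures."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)       # soft teacher targets
    log_p_student = F.log_softmax(student_logits / T, dim=1)  # student log-probs
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```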
Alternatives and similar repositories for replayed_distillation
Users that are interested in replayed_distillation are comparing it to the libraries listed below
- Low-rank convolutional neural networks☆97Updated 9 years ago
- ☆55Updated 6 years ago
- Code for https://arxiv.org/abs/1810.04622☆141Updated 5 years ago
- Structured Bayesian Pruning, NIPS 2017☆74Updated 5 years ago
- Path-Level Network Transformation for Efficient Architecture Search, in ICML 2018.☆112Updated 7 years ago
- ☆34Updated 8 years ago
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018)☆126Updated 6 years ago
- DSD model zoo. Higher-accuracy models from DSD training on ImageNet with the same model architecture☆113Updated 6 years ago
- TensorFlow implementation of PNASNet-5 on ImageNet☆101Updated 6 years ago
- PyTorch Implementation of Weights Pruning☆184Updated 7 years ago
- Implementation of Progressive Neural Architecture Search in Keras and Tensorflow☆118Updated 6 years ago
- Implementation for Trained Ternary Network.☆108Updated 8 years ago
- ☆87Updated 7 years ago
- caffe model of ICCV'17 paper - ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression https://arxiv.org/abs/1707.06342☆147Updated 6 years ago
- This is the code for the FAT method, with links to quantized tflite models. (CC BY-NC-ND)☆19Updated 6 years ago
- Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee☆60Updated 7 years ago
- ☆135Updated 6 years ago
- Caffe implementation for dynamic network surgery.☆187Updated 7 years ago
- Torch implementation of CVPR'17 - Local Binary Convolutional Neural Networks http://xujuefei.com/lbcnn.html☆103Updated 6 years ago
- Training wide residual networks for deployment using a single bit for each weight - Official Code Repository for ICLR 2018 Published Pape…☆37Updated 5 years ago
- This is my final-year Bachelor of Engineering project. It's still incomplete, though. I am trying to replicate the research paper "Deep …☆76Updated 7 years ago
- Training Low-bits DNNs with Stochastic Quantization☆74Updated 8 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124☆72Updated 6 years ago
- ☆23Updated 9 years ago
- Tensorflow codes for "Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers"☆30Updated 5 years ago
- Codes for Layer-wise Optimal Brain Surgeon☆78Updated 6 years ago
- Code for Decorrelated Batch Normalization☆82Updated 7 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"☆336Updated last year
- [ICML 2018] "Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions"☆152Updated 3 years ago
- Transfer learning written in MXNet☆55Updated 7 years ago