irapha / replayed_distillation
Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arXiv!)
☆81 · Updated 7 years ago
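For context, data-free knowledge distillation trains a compact student network to mimic a teacher without access to the original training set: synthetic inputs are reconstructed from activation statistics recorded from the teacher, and the student is then fit to the teacher's temperature-softened outputs. The NumPy sketch below illustrates only the generic softened-KL distillation loss; it is a minimal illustration, not code from this repository, and the function names, temperature value, and toy logits are assumptions made for the example.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax along the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Mean KL(teacher || student) over a batch of softened class distributions.

    In the data-free setting these logits would come from synthesized inputs,
    not from the original training data; here they are toy values.
    """
    p = softmax(teacher_logits, T)  # soft targets produced by the teacher
    q = softmax(student_logits, T)  # student's softened predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean())

# Toy usage: a batch of two 3-class logit vectors (hypothetical values).
teacher_logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.2]])
student_logits = np.array([[1.5, 0.7, -0.8], [0.0, 1.0, 0.5]])
print(distillation_loss(student_logits, teacher_logits))
```

A higher temperature T spreads the teacher's probability mass over non-argmax classes, which is what gives the student a richer training signal than hard labels alone.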
Alternatives and similar repositories for replayed_distillation:
Users interested in replayed_distillation are comparing it to the libraries listed below.
- Low-rank convolutional neural networks ☆97 · Updated 9 years ago
- ☆34 · Updated 8 years ago
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018) ☆124 · Updated 6 years ago
- Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee ☆59 · Updated 6 years ago
- ☆135 · Updated 6 years ago
- Reducing the size of convolutional neural networks ☆113 · Updated 7 years ago
- TensorFlow implementation of PNASNet-5 on ImageNet ☆101 · Updated 6 years ago
- Path-Level Network Transformation for Efficient Architecture Search, in ICML 2018 ☆112 · Updated 6 years ago
- Implementation of Trained Ternary Networks. ☆108 · Updated 8 years ago
- My final-year Bachelor of Engineering project; it's still incomplete. I am trying to replicate the research paper "Deep … ☆76 · Updated 7 years ago
- ☆87 · Updated 6 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆140 · Updated 5 years ago
- Training Low-bits DNNs with Stochastic Quantization ☆73 · Updated 7 years ago
- ☆55 · Updated 6 years ago
- PyTorch implementation of weight pruning for Murata Group's CREST project ☆57 · Updated 6 years ago
- TensorFlow code for "Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers" ☆30 · Updated 5 years ago
- Cheap distillation for convolutional neural networks. ☆33 · Updated 6 years ago
- Caffe implementation of Dynamic Network Surgery. ☆186 · Updated 7 years ago
- Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization, at CVPR'19 ☆48 · Updated 5 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124 ☆71 · Updated 6 years ago
- Binary Convolution Network for faster real-time processing in ASICs ☆56 · Updated 6 years ago
- Code for Layer-wise Optimal Brain Surgeon ☆77 · Updated 6 years ago
- Caffe model for the ICCV 2017 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" https://arxiv.org/abs/1707.06342 ☆147 · Updated 6 years ago
- Papers related to efficient deep neural networks ☆86 · Updated 3 years ago
- DSD model zoo: higher-accuracy models from DSD training on ImageNet, with the same model architectures ☆113 · Updated 6 years ago
- PyTorch implementation of weight pruning ☆185 · Updated 7 years ago
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) ☆104 · Updated 5 years ago
- Implementation of our accepted CVPR 2018 paper "Rethinking Feature Distribution for Loss Functions in Image Classification" ☆173 · Updated 7 years ago
- Binarized Dense and Conv2D layers for Keras ☆17 · Updated 7 years ago
- A tutorial on 'Soft weight-sharing for Neural Network compression' published at ICLR 2017 ☆145 · Updated 8 years ago