irapha / replayed_distillation
Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arxiv!)
☆81 · Updated 7 years ago
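Data-free knowledge distillation reconstructs a transfer set from the teacher's recorded layer statistics instead of the original training data, but the distillation objective itself is typically the standard temperature-softened KL loss of Hinton et al. (2015). A minimal NumPy sketch of that loss follows; the function names are illustrative and not taken from the repository:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer targets.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * float(np.sum(p * (np.log(p) - np.log(q))))
```

With identical logits the loss is zero; it grows as the student's softened distribution diverges from the teacher's.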
Alternatives and similar repositories for replayed_distillation:
Users interested in replayed_distillation are comparing it to the repositories listed below.
- Low-rank convolutional neural networks ☆97 · Updated 9 years ago
- ☆33 · Updated 8 years ago
- Path-Level Network Transformation for Efficient Architecture Search, in ICML 2018. ☆112 · Updated 6 years ago
- TensorFlow implementation of PNASNet-5 on ImageNet ☆101 · Updated 6 years ago
- Training Low-bits DNNs with Stochastic Quantization ☆73 · Updated 7 years ago
- Implementation of Progressive Neural Architecture Search in Keras and TensorFlow ☆118 · Updated 6 years ago
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018) ☆124 · Updated 6 years ago
- TensorFlow code for "Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers" ☆30 · Updated 5 years ago
- Neural architecture search (NAS) ☆14 · Updated 5 years ago
- Caffe model of the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" https://arxiv.org/abs/1707.06342 ☆146 · Updated 6 years ago
- ☆55 · Updated 6 years ago
- Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization (CVPR 2019) ☆48 · Updated 5 years ago
- ICLR 2018 reproducibility challenge - Multi-Scale Dense Convolutional Networks for Efficient Prediction ☆135 · Updated 6 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124 ☆71 · Updated 6 years ago
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) ☆104 · Updated 5 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆140 · Updated 5 years ago
- PyTorch implementation of weight pruning ☆185 · Updated 7 years ago
- Implementation of Trained Ternary Networks ☆108 · Updated 8 years ago
- AutoDIAL Caffe implementation ☆27 · Updated 7 years ago
- ☆87 · Updated 6 years ago
- Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee ☆59 · Updated 6 years ago
- PyTorch implementation of SNAS ☆75 · Updated 6 years ago
- [ECCV 2018] Sparsely Aggregated Convolutional Networks https://arxiv.org/abs/1801.05895 ☆124 · Updated 6 years ago
- Code for Layer-wise Optimal Brain Surgeon ☆77 · Updated 6 years ago
- Code for the FAT method, with links to quantized TFLite models. (CC BY-NC-ND) ☆19 · Updated 6 years ago
- Project page for Heated-up Softmax Embedding ☆43 · Updated 6 years ago
- BlockDrop: Dynamic Inference Paths in Residual Networks ☆142 · Updated 2 years ago
- ☆135 · Updated 6 years ago
- Implements quantized distillation. Code for the paper "Model compression via distillation and quantization" ☆331 · Updated 8 months ago
- Recreates Deep Compression's pruning, quantization, and Huffman-encoding pipeline ☆9 · Updated 6 years ago