mrgloom / Network-Speed-and-Compression
Network acceleration methods
☆178 · Updated 3 years ago
Alternatives and similar repositories for Network-Speed-and-Compression:
- A list of awesome papers on deep model compression and acceleration ☆351 · Updated 3 years ago
- Hands-on Tutorial on Automated Deep Learning ☆149 · Updated 4 years ago
- A PyTorch implementation of MnasNet: Platform-Aware Neural Architecture Search for Mobile. ☆273 · Updated 6 years ago
- ☆67 · Updated 5 years ago
- Caffe implementation for dynamic network surgery. ☆186 · Updated 7 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆166 · Updated 4 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆140 · Updated 5 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Updated 5 years ago
- Papers for deep neural network compression and acceleration ☆397 · Updated 3 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆332 · Updated 9 months ago
- Papers about model compression ☆166 · Updated 2 years ago
- Related Papers on Efficient Deep Neural Networks ☆86 · Updated 3 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124 ☆71 · Updated 6 years ago
- Neural architecture search (NAS) ☆14 · Updated 6 years ago
- Code and pretrained model for IGCV3 ☆189 · Updated 6 years ago
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks ☆242 · Updated 2 years ago
- ☆87 · Updated 6 years ago
- ☆213 · Updated 6 years ago
- Caffe model of the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" https://arxiv.org/abs/1707.06342 ☆147 · Updated 6 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 5 years ago
- model-compression-and-acceleration-4-DNN ☆21 · Updated 6 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆150 · Updated 2 years ago
- Bridging the Gap Between Stability and Scalability in Neural Architecture Search ☆141 · Updated 3 years ago
- A pyCaffe implementation of the ICLR 2017 paper "Pruning Filters for Efficient ConvNets" ☆43 · Updated 6 years ago
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours ☆395 · Updated 4 years ago
- Caffe implementation of incremental network quantization ☆192 · Updated 6 years ago
- Label Refinery: Improving ImageNet Classification through Label Progression ☆279 · Updated 6 years ago
- I demonstrate how to compress a neural network using pruning in TensorFlow. ☆78 · Updated 7 years ago
- Single Path One-Shot NAS MXNet implementation with full training and searching pipeline. Supports both block and channel selection. Search… ☆151 · Updated 5 years ago
- Implementation of model compression with the knowledge distillation method. ☆343 · Updated 8 years ago