mrgloom / Network-Speed-and-Compression
Network acceleration methods
☆178 · Updated 4 years ago
Alternatives and similar repositories for Network-Speed-and-Compression
Users interested in Network-Speed-and-Compression are comparing it to the libraries listed below.
- a list of awesome papers on deep model compression and acceleration ☆349 · Updated 4 years ago
- Hands-on Tutorial on Automated Deep Learning ☆149 · Updated 5 years ago
- papers about model compression ☆166 · Updated 2 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124 ☆72 · Updated 7 years ago
- Some recent quantization techniques in PyTorch ☆72 · Updated 6 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆168 · Updated 4 years ago
- This is my final year project for a Bachelor of Engineering. It's still incomplete, though. I am trying to replicate the research paper "Deep … ☆76 · Updated 8 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 5 years ago
- I demonstrate how to compress a neural network using pruning in TensorFlow. ☆78 · Updated 8 years ago
- ☆67 · Updated 6 years ago
- PyTorch Implementation of Weights Pruning ☆185 · Updated 7 years ago
- Related papers on efficient deep neural networks ☆86 · Updated 4 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆140 · Updated 6 years ago
- Papers on deep neural network compression and acceleration ☆403 · Updated 4 years ago
- Caffe implementation of the ICCV 2017 & TPAMI 2018 paper ThiNet ☆46 · Updated 7 years ago
- model-compression-and-acceleration-4-DNN ☆21 · Updated 7 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Updated 5 years ago
- Code for the paper "Benchmark Analysis of Representative Deep Neural Network Architectures" ☆164 · Updated 5 years ago
- A PyTorch implementation of "MnasNet: Platform-Aware Neural Architecture Search for Mobile" ☆275 · Updated 7 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆336 · Updated last year
- ☆213 · Updated 7 years ago
- Caffe implementation of dynamic network surgery ☆188 · Updated 8 years ago
- Caffe model from the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" https://arxiv.org/abs/1707.06342 ☆148 · Updated 7 years ago
- Neural architecture search (NAS) ☆14 · Updated 6 years ago
- Implementation of model compression with the knowledge distillation method ☆342 · Updated 9 years ago
- Implementation of iterative pruning for deep neural networks [Han2015] ☆40 · Updated 7 years ago
- A pyCaffe implementation of the ICLR 2017 publication "Pruning Filters for Efficient ConvNets" ☆43 · Updated 7 years ago
- FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search ☆304 · Updated last year
- BMXNet 2: An Open-Source Binary Neural Network Implementation Based on MXNet ☆232 · Updated 3 years ago
- ☆138 · Updated 6 years ago
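Many of the repositories above implement some form of weight pruning. As a rough, self-contained sketch of the common idea (not code from any listed repo; the function name and threshold rule are illustrative assumptions), magnitude-based pruning zeroes out the fraction of weights with the smallest absolute values:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Illustrative sketch: zero out the smallest-magnitude
    `sparsity` fraction of the weights."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.5, -0.01], [0.002, -0.8]])
pruned = prune_by_magnitude(w, 0.5)  # keeps 0.5 and -0.8, zeroes the rest
```

The listed repositories differ mainly in how the threshold is chosen (per layer, per filter, or globally) and in whether pruning is applied once or iteratively with retraining.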