mrgloom / Network-Speed-and-Compression
Network acceleration methods
☆177, updated 4 years ago
Alternatives and similar repositories for Network-Speed-and-Compression
Users who are interested in Network-Speed-and-Compression are comparing it to the libraries listed below.
- A list of awesome papers on deep model compression and acceleration (☆351, updated 4 years ago)
- Hands-on Tutorial on Automated Deep Learning (☆149, updated 4 years ago)
- Some recent quantization techniques on PyTorch (☆72, updated 5 years ago)
- Papers about model compression (☆166, updated 2 years ago)
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices (☆167, updated 4 years ago)
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124 (☆72, updated 6 years ago)
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" (☆336, updated 11 months ago)
- A PyTorch implementation of MnasNet: Platform-Aware Neural Architecture Search for Mobile (☆274, updated 6 years ago)
- model-compression-and-acceleration-4-DNN (☆21, updated 6 years ago)
- (☆67, updated 5 years ago)
- This is my final year project of Bachelor of Engineering. It's still incomplete, though. I am trying to replicate the research paper "Deep … (☆76, updated 7 years ago)
- Code for https://arxiv.org/abs/1810.04622 (☆141, updated 5 years ago)
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" (☆167, updated 5 years ago)
- Demonstrates how to compress a neural network using pruning in TensorFlow (☆78, updated 7 years ago)
- Papers for deep neural network compression and acceleration (☆399, updated 4 years ago)
- Bridging the Gap Between Stability and Scalability in Neural Architecture Search (☆141, updated 3 years ago)
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" (☆196, updated 5 years ago)
- Neural architecture search (NAS) (☆14, updated 6 years ago)
- Caffe model of the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression" https://arxiv.org/abs/1707.06342 (☆147, updated 6 years ago)
- Code for the paper "Benchmark Analysis of Representative Deep Neural Network Architectures" (☆164, updated 4 years ago)
- Implementation of model compression with the knowledge distillation method (☆343, updated 8 years ago)
- Caffe implementation of dynamic network surgery (☆187, updated 7 years ago)
- (☆87, updated 6 years ago)
- FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search (☆302, updated 11 months ago)
- Related papers on efficient deep neural networks (☆86, updated 4 years ago)
- Caffe implementation of the ICCV 2017 & TPAMI 2018 paper ThiNet (☆46, updated 6 years ago)
- (☆45, updated 5 years ago)
- PyTorch implementation of weight pruning (☆185, updated 7 years ago)
- Code and pretrained model for IGCV3 (☆189, updated 6 years ago)
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks (☆242, updated 2 years ago)