chester256 / Model-Compression-Papers
Papers for deep neural network compression and acceleration
☆396Updated 3 years ago
Alternatives and similar repositories for Model-Compression-Papers:
Users interested in Model-Compression-Papers also compare it to the repositories listed below
- ☆665Updated 3 years ago
- a list of awesome papers on deep model compression and acceleration☆351Updated 3 years ago
- Collection of recent methods on (deep) neural network compression and acceleration.☆936Updated 3 months ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"☆332Updated 7 months ago
- Summary, Code for Deep Neural Network Quantization☆544Updated 4 months ago
- Awesome machine learning model compression research papers, quantization, tools, and learning material.☆504Updated 5 months ago
- papers about model compression☆167Updated 2 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices☆439Updated last year
- Rethinking the Value of Network Pruning (Pytorch) (ICLR 2019)☆1,513Updated 4 years ago
- Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626☆176Updated 2 years ago
- Knowledge distillation methods implemented with Tensorflow (currently 11 (+1) methods, with more to be added)☆264Updated 5 years ago
- A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures,…☆850Updated 3 years ago
- PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference☆877Updated 5 years ago
- Network acceleration methods☆178Updated 3 years ago
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks☆379Updated 5 years ago
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. In ICCV 2019.☆354Updated 4 years ago
- Slimmable Networks, AutoSlim, and Beyond, ICLR 2019, and ICCV 2019☆916Updated last year
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours☆396Updated 4 years ago
- knowledge distillation papers☆748Updated 2 years ago
- Pruning Neural Networks with Taylor criterion in Pytorch☆315Updated 5 years ago
- PyTorch implementation of 'Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding' by …☆418Updated 5 years ago
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision☆379Updated 4 years ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods.☆428Updated last year
- A Pytorch implementation of Neural Network Compression (pruning, deep compression, channel pruning)☆155Updated 7 months ago
- ☆213Updated 6 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks"☆196Updated 5 years ago
- Repository to track the progress in model compression and acceleration☆105Updated 3 years ago
- Sparse learning library and sparse momentum resources.☆379Updated 2 years ago
- Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf☆256Updated 5 years ago
- Quantization of Convolutional Neural networks.☆243Updated 6 months ago
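Several of the repositories above center on magnitude-based weight pruning (e.g. the Learning both Weights and Connections work, arXiv 1506.02626). As a rough orientation for readers new to the area, the following is a minimal sketch of one-shot magnitude pruning in NumPy; the function name and the illustrative weight matrix are hypothetical, not taken from any of the listed codebases.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """One-shot magnitude pruning (hypothetical helper, for illustration):
    zero out the smallest-magnitude fraction `sparsity` of the weights.
    Ties at the threshold may prune slightly more than the exact fraction."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Illustrative example: prune half of a tiny weight matrix
w = np.array([[0.9, -0.05],
              [0.01, -0.7]])
pruned = magnitude_prune(w, 0.5)
# The two small entries (0.01, -0.05) are zeroed; 0.9 and -0.7 survive
```

In the papers listed, this step is typically followed by retraining the surviving weights to recover accuracy, and often iterated (prune, retrain, prune again) rather than applied once.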