chester256 / Model-Compression-Papers
Papers for deep neural network compression and acceleration
☆397 · Updated 3 years ago
Alternatives and similar repositories for Model-Compression-Papers:
Users interested in Model-Compression-Papers are comparing it to the repositories listed below (a minimal pruning sketch follows the list).
- ☆664 · Updated 3 years ago
- A list of awesome papers on deep model compression and acceleration ☆350 · Updated 3 years ago
- Papers about model compression ☆166 · Updated last year
- Collection of recent methods on (deep) neural network compression and acceleration. ☆937 · Updated last month
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆436 · Updated last year
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆331 · Updated 6 months ago
- Summary, Code for Deep Neural Network Quantization ☆540 · Updated 3 months ago
- Awesome machine learning model compression research papers, quantization, tools, and learning material. ☆502 · Updated 4 months ago
- Network acceleration methods ☆178 · Updated 3 years ago
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019) ☆1,511 · Updated 4 years ago
- A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures,… ☆847 · Updated 3 years ago
- PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference ☆877 · Updated 5 years ago
- Knowledge distillation methods implemented with TensorFlow (currently 11 (+1) methods, with more to be added) ☆265 · Updated 5 years ago
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks ☆377 · Updated 5 years ago
- Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626 ☆176 · Updated 2 years ago
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours ☆396 · Updated 4 years ago
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. In ICCV 2019. ☆352 · Updated 4 years ago
- Slimmable Networks, AutoSlim, and Beyond (ICLR 2019, ICCV 2019) ☆915 · Updated last year
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision ☆378 · Updated 3 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Updated 4 years ago
- Code for "And the bit goes down: Revisiting the quantization of neural networks" ☆634 · Updated 4 years ago
- PyTorch implementation of 'Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding' by … ☆416 · Updated 4 years ago
- Pruning Neural Networks with Taylor criterion in PyTorch ☆315 · Updated 5 years ago
- ☆213 · Updated 6 years ago
- Implementation of model compression with the knowledge distillation method. ☆344 · Updated 8 years ago
- Quantization of Convolutional Neural Networks. ☆243 · Updated 5 months ago
- FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search ☆303 · Updated 6 months ago
- Repository to track the progress in model compression and acceleration ☆105 · Updated 3 years ago
- Graph Transforms to Quantize and Retrain Deep Neural Nets in TensorFlow ☆168 · Updated 5 years ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆427 · Updated last year
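Several of the repositories above center on magnitude-based weight pruning (e.g., the "Learning both Weights and Connections" code and the pruning-evaluation library at the end of the list). As a rough, non-authoritative illustration of the idea, here is a minimal sketch using PyTorch's built-in `torch.nn.utils.prune` utilities; the toy model and the 50% sparsity level are assumptions chosen for the demo, not taken from any listed repository.

```python
# Minimal sketch of magnitude-based (L1) unstructured pruning with PyTorch's
# built-in pruning utilities. The tiny model and 50% sparsity level are
# illustrative assumptions, not the setup of any repository listed above.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Prune the 50% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Report the resulting sparsity per layer, then make the pruning permanent
# (prune.remove folds the binary mask into the weight tensor).
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        sparsity = float((module.weight == 0).sum()) / module.weight.numel()
        print(f"{name}: {sparsity:.0%} of weights pruned")
        prune.remove(module, "weight")
```

In practice, the pruning repositories above typically alternate pruning with fine-tuning and extend the idea to structured (filter or channel) pruning, rather than stopping after a single unstructured pass as this sketch does.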