zepx / pytorch-weight-prune
PyTorch implementation of weight pruning for the Murata Group's CREST project
☆58 · Updated 6 years ago
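For readers new to the topic, here is a minimal sketch of magnitude-based weight pruning in PyTorch using the stock `torch.nn.utils.prune` utilities; it is illustrative only (not this repository's code), and the layer size and 30% sparsity target are arbitrary assumptions:

```python
# Minimal magnitude-based weight pruning sketch (illustrative, not this repo's code).
# torch.nn.utils.prune.l1_unstructured zeroes the weights with the smallest |w|.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(16, 8)                                 # toy layer: 128 weights
prune.l1_unstructured(layer, name="weight", amount=0.3)  # mask the 30% smallest |w|

# After pruning, layer.weight is weight_orig * weight_mask (recomputed on access).
sparsity = float((layer.weight == 0).sum()) / layer.weight.numel()
print(f"sparsity: {sparsity:.2f}")                       # roughly 0.30
```

Most of the repositories listed below elaborate on this basic idea: choosing which weights (or whole channels) to remove, and how to retrain or fine-tune afterwards.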
Related projects:
- PyTorch Implementation of Weights Pruning ☆184 · Updated 6 years ago
- A PyTorch implementation of Neural Network Compression (pruning, deep compression, channel pruning) ☆154 · Updated 2 months ago
- ☆212 · Updated 5 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆164 · Updated 4 years ago
- A PyTorch implementation of the iterative pruning method described in Han et al. (2015) ☆40 · Updated 5 years ago
- Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression. ☆113 · Updated 4 years ago
- Compress neural networks with pruning and quantization using TensorFlow. ☆105 · Updated 5 years ago
- Reducing the size of convolutional neural networks ☆112 · Updated 6 years ago
- SNIP: Single-Shot Network Pruning Based on Connection Sensitivity ☆108 · Updated 5 years ago
- Learning both Weights and Connections for Efficient Neural Networks (https://arxiv.org/abs/1506.02626) ☆174 · Updated last year
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018) ☆124 · Updated 6 years ago
- Class project for 18663: an implementation of FBNet (hardware-aware DNAS) ☆32 · Updated 4 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆139 · Updated 5 years ago
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks ☆239 · Updated 2 years ago
- Repository containing pruned models and related information ☆35 · Updated 3 years ago
- ENC: A fast network compression platform (Caffe implementation) ☆22 · Updated 5 years ago
- ☆66 · Updated 4 years ago
- Repository to track the progress in model compression and acceleration ☆103 · Updated 3 years ago
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers (https://arxiv.org/abs/1802.00124) ☆71 · Updated 5 years ago
- ☆53 · Updated 5 years ago
- Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆194 · Updated 4 years ago
- Code for Layer-wise Optimal Brain Surgeon ☆75 · Updated 5 years ago
- Prune DNNs using the Alternating Direction Method of Multipliers (ADMM) ☆104 · Updated 3 years ago
- Training with FP16 weights in PyTorch ☆76 · Updated 5 years ago
- ☆38 · Updated 6 years ago
- Code example for the ICLR 2018 oral paper ☆150 · Updated 6 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆165 · Updated 3 years ago
- Implementation of Data-Free Knowledge Distillation for Deep Neural Networks (on arXiv) ☆79 · Updated 6 years ago
- This is the code for the FAT method, with links to quantized TFLite models (CC BY-NC-ND) ☆19 · Updated 5 years ago
- This repository contains training examples for the CVPR 2018 paper "SYQ: Learning Symmetric Quantization for Efficient Deep Neural Netwo…" ☆32 · Updated 5 years ago