Eric-mingjie / rethinking-network-pruning
Rethinking the Value of Network Pruning (Pytorch) (ICLR 2019)
☆1,514 · Updated 4 years ago
Alternatives and similar repositories for rethinking-network-pruning:
Users interested in rethinking-network-pruning are comparing it to the repositories listed below.
- Network Slimming (Pytorch) (ICCV 2017) ☆914 · Updated 4 years ago
- Slimmable Networks, AutoSlim, and Beyond, ICLR 2019, and ICCV 2019 ☆917 · Updated 2 years ago
- PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference ☆879 · Updated 5 years ago
- Collection of recent methods on (deep) neural network compression and acceleration. ☆944 · Updated last month
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆441 · Updated last year
- ☆669 · Updated 3 years ago
- Papers for deep neural network compression and acceleration ☆397 · Updated 3 years ago
- A curated list of neural network pruning resources. ☆2,437 · Updated last year
- Learning Efficient Convolutional Networks through Network Slimming, In ICCV 2017. ☆568 · Updated 5 years ago
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17) ☆1,082 · Updated last year
- Learning Efficient Convolutional Networks through Network Slimming, In ICCV 2017. ☆572 · Updated 5 years ago
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks ☆381 · Updated 5 years ago
- Code for: "And the bit goes down: Revisiting the quantization of neural networks" ☆633 · Updated 4 years ago
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. In ICCV 2019. ☆355 · Updated 4 years ago
- PyTorch implementation of 'Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding' by … ☆418 · Updated 5 years ago
- [ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware ☆1,440 · Updated 8 months ago
- Summary, Code for Deep Neural Network Quantization ☆547 · Updated 6 months ago
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,390 · Updated 2 years ago
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,913 · Updated last year
- Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral) ☆614 · Updated last year
- Model analyzer in PyTorch ☆1,483 · Updated 2 years ago
- micronet, a model compression and deploy lib. compression: 1、quantization: quantization-aware-training(QAT), High-Bit(>2b)(DoReFa/Quantiz… ☆2,247 · Updated 2 weeks ago
- NAS-Bench-201 API and Instruction ☆632 · Updated 4 years ago
- Pruning Neural Networks with Taylor criterion in Pytorch ☆318 · Updated 5 years ago
- Knowledge Distillation: CVPR2020 Oral, Revisiting Knowledge Distillation via Label Smoothing Regularization ☆587 · Updated 2 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆333 · Updated 9 months ago
- PyTorch DataLoaders implemented with DALI for accelerating image preprocessing ☆881 · Updated 4 years ago
- A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures,… ☆853 · Updated 3 years ago
- Official PyTorch implementation of "A Comprehensive Overhaul of Feature Distillation" (ICCV 2019) ☆417 · Updated 4 years ago
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility ☆1,932 · Updated 2 years ago