IntelLabs / distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
☆4,401 · Updated 2 years ago
Alternatives and similar repositories for distiller
Users that are interested in distiller are comparing it to the libraries listed below
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019) ☆1,514 · Updated 5 years ago
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,928 · Updated last year
- [ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware ☆1,449 · Updated last year
- A curated list of neural network pruning resources. ☆2,476 · Updated last year
- Model analyzer in PyTorch ☆1,495 · Updated 2 years ago
- Awesome Knowledge Distillation ☆3,732 · Updated 3 months ago
- Codebase for Image Classification Research, written in PyTorch. ☆2,163 · Updated last year
- PyTorch implementation of "Efficient Neural Architecture Search via Parameters Sharing" ☆2,724 · Updated 2 years ago
- Count the MACs / FLOPs of your PyTorch model. ☆5,046 · Updated last year
- Differentiable architecture search for convolutional and recurrent networks ☆3,972 · Updated 4 years ago
- FLOPs counter for neural networks in the PyTorch framework ☆2,943 · Updated 3 weeks ago
- micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT), High-Bit (>2b) (DoReFa/Quantiz… ☆2,256 · Updated 4 months ago
- Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis) ☆2,306 · Updated 2 years ago
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,909 · Updated 2 years ago
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility ☆1,963 · Updated 2 years ago
- Memory consumption and FLOP count estimates for convnets ☆927 · Updated 6 years ago
- Pretrained EfficientNet, EfficientNet-Lite, MixNet, MobileNetV3 / V2, MNASNet A1 and B1, FBNet, Single-Path NAS ☆1,579 · Updated last year
- PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference ☆884 · Updated 6 years ago
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks. E.g. model conversion and visualization. Co… ☆5,811 · Updated last month
- Collection of recent methods on (deep) neural network compression and acceleration. ☆952 · Updated 5 months ago
- A PyTorch implementation of MobileNet V2 architecture and pretrained model. ☆1,398 · Updated 5 years ago
- On the Variance of the Adaptive Learning Rate and Beyond ☆2,554 · Updated 4 years ago
- ☆669 · Updated 4 years ago
- Automated deep learning algorithms implemented in PyTorch. ☆1,583 · Updated 3 years ago
- A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures,… ☆852 · Updated 4 years ago
- A converter for deep learning models between different deep learning frameworks. ☆3,250 · Updated 2 years ago
- ☆1,501 · Updated 5 years ago
- Improving Convolutional Networks via Attention Transfer (ICLR 2017) ☆1,459 · Updated 7 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,542 · Updated 6 years ago
- Official Implementation of 'Fast AutoAugment' in PyTorch. ☆1,610 · Updated 4 years ago