mit-han-lab / once-for-all
[ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment
☆1,902 · Updated last year
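Before the list of alternatives, here is a minimal sketch of what once-for-all provides: loading a pretrained OFA supernet and extracting a specialized subnet for deployment. It assumes the `ofa` pip package released with this repo and follows the helper names from its README; treat it as an illustration rather than a guaranteed API.

```python
# Minimal sketch, assuming the `ofa` package from mit-han-lab/once-for-all.
import torch
from ofa.model_zoo import ofa_net

# Load a pretrained once-for-all MobileNetV3 supernet.
ofa_network = ofa_net("ofa_mbv3_d234_e346_k357_w1.0", pretrained=True)

# Pick one depth / expand-ratio / kernel-size configuration and extract the
# corresponding standalone subnet with weights inherited from the supernet.
ofa_network.sample_active_subnet()
subnet = ofa_network.get_active_subnet(preserve_weight=True)

# The subnet is an ordinary nn.Module, ready for evaluation or deployment.
with torch.no_grad():
    out = subnet(torch.randn(1, 3, 224, 224))
print(out.shape)
```

In practice the configuration is chosen with the accuracy and latency predictors described in the paper rather than sampled at random.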
Alternatives and similar repositories for once-for-all:
Users interested in once-for-all are comparing it to the libraries listed below.
- [ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware ☆1,433 · Updated 6 months ago
- Mobile vision models and code ☆911 · Updated last week
- Collection of recent methods on (deep) neural network compression and acceleration. ☆939 · Updated 3 months ago
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019) ☆1,516 · Updated 4 years ago
- Automated deep learning algorithms implemented in PyTorch. ☆1,575 · Updated 2 years ago
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,376 · Updated last year
- FLOPs counter for convolutional networks in the PyTorch framework ☆2,869 · Updated 2 months ago
- A curated list of neural network pruning resources. ☆2,426 · Updated 11 months ago
- Codebase for Image Classification Research, written in PyTorch. ☆2,150 · Updated last year
- Model analyzer in PyTorch ☆1,477 · Updated 2 years ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,249 · Updated last week
- A general and accurate MACs / FLOPs profiler for PyTorch models ☆601 · Updated 10 months ago
- Slimmable Networks, AutoSlim, and Beyond (ICLR 2019 and ICCV 2019) ☆917 · Updated 2 years ago
- Pretrained EfficientNet, EfficientNet-Lite, MixNet, MobileNetV3 / V2, MNASNet A1 and B1, FBNet, Single-Path NAS ☆1,568 · Updated 9 months ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆428 · Updated last year
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆440 · Updated last year
- Count the MACs / FLOPs of your PyTorch model (see the usage sketch after this list). ☆4,964 · Updated 8 months ago
- ☆668 · Updated 3 years ago
- Collection of common code that is shared among different research projects in the FAIR computer vision team. ☆2,093 · Updated 3 months ago
- Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis) ☆2,270 · Updated 2 years ago
- A list of high-quality (newest) AutoML works and lightweight models, including 1) Neural Architecture Search, 2) Lightweight Structures,… ☆852 · Updated 3 years ago
- A coding-free framework built on PyTorch for reproducible deep learning studies. PyTorch Ecosystem. 🏆 25 knowledge distillation methods p… ☆1,467 · Updated last week
- Code for "And the bit goes down: Revisiting the quantization of neural networks" ☆634 · Updated 4 years ago
- Network Slimming (PyTorch) (ICCV 2017) ☆916 · Updated 4 years ago
- micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa/Quantiz… ☆2,230 · Updated 3 years ago
- 72.8% MobileNetV2 1.0 model on ImageNet and a spectrum of pre-trained MobileNetV2 models ☆695 · Updated 4 years ago
- Code for the paper "AdderNet: Do We Really Need Multiplications in Deep Learning?" ☆958 · Updated 3 years ago
- [CVPR 2023] DepGraph: Towards Any Structural Pruning ☆2,928 · Updated 2 weeks ago
- Summary and code for deep neural network quantization ☆546 · Updated 5 months ago
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility ☆1,921 · Updated last year
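Several of the entries above are MACs/FLOPs counters for PyTorch models. As the usage sketch referenced in the "Count the MACs / FLOPs of your PyTorch model" entry, the snippet below assumes the `thop` package (pytorch-OpCounter) and uses a torchvision ResNet-18 purely as an example network.

```python
# Minimal sketch, assuming `pip install thop` and torchvision for the example model.
import torch
from torchvision.models import resnet18
from thop import profile

model = resnet18()
dummy_input = torch.randn(1, 3, 224, 224)

# profile() attaches counting hooks, runs one forward pass, and sums per-layer counts.
macs, params = profile(model, inputs=(dummy_input,))
print(f"MACs: {macs / 1e9:.2f} G, Params: {params / 1e6:.2f} M")
```

Note that most of these profilers report multiply-accumulates (MACs); a rough FLOP count is about twice that figure.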