memoiry / Awesome-model-compression-and-acceleration
☆668 Updated 3 years ago
Alternatives and similar repositories for Awesome-model-compression-and-acceleration
Users interested in Awesome-model-compression-and-acceleration are comparing it to the libraries listed below
- Papers for deep neural network compression and acceleration ☆399 Updated 4 years ago
- Collection of recent methods on (deep) neural network compression and acceleration. ☆948 Updated 3 months ago
- A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures,… ☆852 Updated 4 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆443 Updated last year
- Rethinking the Value of Network Pruning (Pytorch) (ICLR 2019) ☆1,513 Updated 5 years ago
- a list of awesome papers on deep model compression and acceleration ☆351 Updated 4 years ago
- Summary, Code for Deep Neural Network Quantization ☆549 Updated last month
- papers about model compression ☆166 Updated 2 years ago
- Network Slimming (Pytorch) (ICCV 2017) ☆912 Updated 4 years ago
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. In ICCV 2019. ☆354 Updated 5 years ago
- Awesome machine learning model compression research papers, quantization, tools, and learning material. ☆526 Updated 9 months ago
- Learning Efficient Convolutional Networks through Network Slimming, In ICCV 2017. ☆569 Updated 6 years ago
- Learning Efficient Convolutional Networks through Network Slimming, In ICCV 2017. ☆574 Updated 6 years ago
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks ☆380 Updated 5 years ago
- PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference ☆882 Updated 6 years ago
- Slimmable Networks, AutoSlim, and Beyond, ICLR 2019, and ICCV 2019 ☆922 Updated 2 years ago
- knowledge distillation papers ☆757 Updated 2 years ago
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17) ☆1,083 Updated last year
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆336 Updated 11 months ago
- PyTorch implementation of 'Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding' by … ☆421 Updated 5 years ago
- [ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware ☆1,442 Updated 10 months ago
- Code for: "And the bit goes down: Revisiting the quantization of neural networks" ☆633 Updated 4 years ago
- Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626 ☆177 Updated 2 years ago
- Model Quantization Benchmark ☆820 Updated 2 months ago
- Code for paper "AdderNet: Do We Really Need Multiplications in Deep Learning?" ☆963 Updated 3 years ago
- Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral) ☆617 Updated last year
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision ☆389 Updated 4 years ago
- micronet, a model compression and deploy lib. compression: 1) quantization: quantization-aware-training (QAT), High-Bit (>2b) (DoReFa/Quantiz… ☆2,253 Updated 2 months ago
- A curated list of neural network pruning resources. ☆2,462 Updated last year
- ☆196 Updated 11 months ago
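
Most of the repositories above revolve around pruning and quantization. As a quick point of reference only (this is not code from any of the listed projects), the minimal sketch below applies PyTorch's built-in `torch.nn.utils.prune` and `torch.quantization.quantize_dynamic` to a toy model; the model architecture, the 30% pruning ratio, and the choice of layers are arbitrary assumptions for illustration.

```python
# Minimal sketch of the two techniques most of these projects target:
# magnitude-based pruning and post-training dynamic quantization,
# using PyTorch's built-in utilities. The toy model and 30% ratio are
# arbitrary choices for demonstration, not values from any listed repo.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for whatever network you want to compress.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# 1) Unstructured L1 pruning: zero out the 30% smallest-magnitude weights
#    in every Conv2d/Linear layer (the zeros stay in the dense tensor).
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # fold the mask in permanently

# 2) Post-training dynamic quantization of the Linear layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 3, 32, 32)
print(quantized(x).shape)  # torch.Size([1, 10])
```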