cedrickchee / awesome-ml-model-compression
Awesome machine learning model compression research papers, quantization, tools, and learning material.
☆527 · Updated 10 months ago
Alternatives and similar repositories for awesome-ml-model-compression
Users interested in awesome-ml-model-compression are comparing it to the repositories listed below.
- Collection of recent methods on (deep) neural network compression and acceleration. ☆948 · Updated 4 months ago
- Summary and code for deep neural network quantization. ☆552 · Updated last month
- ☆668 · Updated 3 years ago
- Papers on deep neural network compression and acceleration. ☆400 · Updated 4 years ago
- Papers about model compression. ☆166 · Updated 2 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices. ☆443 · Updated last year
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆443 · Updated 2 years ago
- A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research; we are co… ☆2,177 · Updated 5 months ago
- [CVPR 2020] ZeroQ: A Novel Zero Shot Quantization Framework. ☆280 · Updated last year
- A DNN inference latency prediction toolkit for accurately modeling and predicting latency on diverse edge devices. ☆357 · Updated last year
- List of papers on neural network quantization from recent AI conferences and journals. ☆677 · Updated 4 months ago
- A curated list of neural network pruning resources. ☆2,468 · Updated last year
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision. ☆392 · Updated 4 years ago
- PyTorch implementation of APoT quantization (ICLR 2020). ☆277 · Updated 7 months ago
- Model Quantization Benchmark. ☆827 · Updated 3 months ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆430 · Updated 2 years ago
- Repository to track progress in model compression and acceleration. ☆106 · Updated 4 years ago
- A library for researching neural network compression and acceleration methods. ☆138 · Updated 11 months ago
- OTOv1-v3 (NeurIPS, ICLR, TMLR): DNN training, compression, structured pruning, erasing operators; CNN, diffusion, LLM. ☆308 · Updated 10 months ago
- ☆205 · Updated 3 years ago
- Rethinking the Value of Network Pruning (PyTorch, ICLR 2019). ☆1,514 · Updated 5 years ago
- Neural network quantization and low-bit fixed-point training for hardware-friendly algorithm design. ☆161 · Updated 4 years ago
- A PyTorch knowledge distillation library for benchmarking and extending work in the domains of Knowledge Distillation, Pruning, and Quan… ☆640 · Updated 2 years ago
- A general and accurate MACs/FLOPs profiler for PyTorch models. ☆624 · Updated last week
- A list of high-quality (newest) AutoML works and lightweight models, including 1.) Neural Architecture Search, 2.) Lightweight Structures,… ☆852 · Updated 4 years ago
- PyTorch implementation of Data-Free Quantization Through Weight Equalization and Bias Correction. ☆262 · Updated last year
- PyTorch implementation of BRECQ (ICLR 2021). ☆282 · Updated 4 years ago
- [ACL 2020] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing. ☆336 · Updated last year
- Learning both Weights and Connections for Efficient Neural Networks (https://arxiv.org/abs/1506.02626). ☆178 · Updated 2 years ago
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment. ☆1,927 · Updated last year
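Two techniques dominate the list above: magnitude-based pruning (e.g. "Learning both Weights and Connections", the pruning-resource lists) and uniform quantization (ZeroQ, BRECQ, APoT, the quantization libraries). The sketch below illustrates both ideas in minimal NumPy; function names and the affine scheme shown are illustrative of the general approach, not the API of any listed repository.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights.

    Keeps the (1 - sparsity) fraction of entries with the largest
    absolute value, as in magnitude-based pruning
    (Han et al., 2015, arXiv:1506.02626).
    """
    flat = np.abs(weights).ravel()
    k = int(np.floor(sparsity * flat.size))  # number of weights to prune
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; entries at or below it are pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_affine(x, num_bits=8):
    """Uniform affine (asymmetric) quantize-then-dequantize.

    The basic post-training scheme underlying most quantization tools:
    map floats to signed num_bits integers via a scale and zero point,
    then map back to see the rounding error the network must tolerate.
    Assumes x is not constant (scale would be zero).
    """
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale  # dequantized approximation of x

w = np.array([[0.8, -0.05, 0.3], [-0.9, 0.02, -0.4]])
pruned = magnitude_prune(w, sparsity=0.5)  # half the entries become exactly zero
deq = quantize_affine(w)                   # w reconstructed within ~scale/2 error
```

Real libraries (e.g. the pruning and quantization toolkits listed) add the pieces this sketch omits: per-channel scales, calibration over activation statistics, and fine-tuning to recover the accuracy lost to pruning and rounding.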