cedrickchee / awesome-ml-model-compression
Awesome machine learning model compression research papers, quantization, tools, and learning material.
☆541 · Updated last year
Alternatives and similar repositories for awesome-ml-model-compression
Users interested in awesome-ml-model-compression are comparing it to the repositories listed below.
- Summary, Code for Deep Neural Network Quantization ☆558 · Updated 5 months ago
- Collection of recent methods on (deep) neural network compression and acceleration. ☆953 · Updated 8 months ago
- ☆670 · Updated 4 years ago
- Papers for deep neural network compression and acceleration ☆403 · Updated 4 years ago
- A list of papers, docs, codes about model quantization. This repo is aimed to provide the info for model quantization research, we are co… ☆2,280 · Updated 9 months ago
- Papers about model compression ☆166 · Updated 2 years ago
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆445 · Updated 2 years ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆451 · Updated 2 years ago
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆279 · Updated last year
- A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures,… ☆856 · Updated 4 years ago
- List of papers related to neural network quantization in recent AI conferences and journals. ☆764 · Updated 8 months ago
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision ☆399 · Updated 4 years ago
- A curated list of neural network pruning resources. ☆2,485 · Updated last year
- Model Quantization Benchmark ☆852 · Updated 7 months ago
- A general and accurate MACs / FLOPs profiler for PyTorch models ☆631 · Updated 4 months ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆432 · Updated 2 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting latency on diverse edge devices. ☆360 · Updated last year
- Knowledge distillation papers ☆764 · Updated 2 years ago
- OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM ☆310 · Updated last year
- PyTorch implementation of the APoT quantization (ICLR 2020) ☆281 · Updated 11 months ago
- A library for researching neural network compression and acceleration methods. ☆140 · Updated 3 months ago
- A simple network quantization demo using PyTorch from scratch. ☆539 · Updated 2 years ago
- PyTorch implementation of BRECQ (ICLR 2021) ☆284 · Updated 4 years ago
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction. ☆263 · Updated 2 years ago
- Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design ☆160 · Updated 4 years ago
- ☆207 · Updated 4 years ago
- A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quan… ☆650 · Updated 2 years ago
- Repository to track the progress in model compression and acceleration ☆106 · Updated 4 years ago
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019) ☆1,517 · Updated 5 years ago
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,940 · Updated last year