Awesome machine learning model compression research papers, quantization, tools, and learning material.
☆540 (updated Sep 21, 2024)
Alternatives and similar repositories for awesome-ml-model-compression
Users interested in awesome-ml-model-compression are comparing it to the repositories listed below.
- ☆669 (updated Aug 25, 2021)
- A curated list of neural network pruning resources. (☆2,492, updated Apr 4, 2024)
- Papers about model compression. (☆166, updated Feb 10, 2023)
- A list of papers, docs, codes about model quantization. This repo is aimed to provide the info for model quantization research, we are co… (☆2,327, updated Jan 29, 2026)
- Collection of recent methods on (deep) neural network compression and acceleration. (☆955, updated Apr 4, 2025)
- Papers for deep neural network compression and acceleration. (☆402, updated Jun 21, 2021)
- Summary, code for deep neural network quantization. (☆558, updated Jun 14, 2025)
- A list of papers, docs, codes about efficient AIGC. This repo is aimed to provide the info for efficient AIGC research, including languag… (☆204, updated Feb 10, 2025)
- List of papers related to neural network quantization in recent AI conferences and journals. (☆805, updated Mar 27, 2025)
- A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures,… (☆857, updated Jun 19, 2021)
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… (☆41, updated Sep 9, 2025)
- Awesome LLM compression research papers and tools. (☆1,786, updated Feb 23, 2026)
- A curated list for Efficient Large Language Models. (☆1,959, updated Jun 17, 2025)
- A list of awesome papers on deep model compression and acceleration. (☆348, updated Jun 19, 2021)
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024). (☆20, updated Feb 16, 2024)
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019). (☆1,516, updated Jun 7, 2020)
- Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis). (☆2,336, updated Sep 26, 2022)
- Embedded and mobile deep learning research resources. (☆762, updated Mar 14, 2023)
- Awesome Knowledge Distillation. (☆3,820, updated Dec 25, 2025)
- micronet, a model compression and deploy lib. Compression: 1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa/Quantiz… (☆2,269, updated May 6, 2025)
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment. (☆1,940, updated Dec 14, 2023)
- Awesome Knowledge-Distillation. Categorized knowledge distillation papers (2014–2021). (☆2,654, updated May 30, 2023)
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models. (☆1,612, updated Jul 12, 2024)
- Knowledge distillation papers. (☆768, updated Feb 10, 2023)
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization". (☆336, updated Jul 25, 2024)
- [ECCV 2024] SparseRefine: Sparse Refinement for Efficient High-Resolution Semantic Segmentation. (☆14, updated Jan 10, 2025)
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, vision foundation models, etc. (☆3,262, updated Sep 7, 2025)
- [ICML 2023] This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… (☆56, updated Mar 4, 2024)
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… (☆1,106, updated Oct 7, 2024)
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric. (☆60, updated Mar 23, 2023)
- Model Quantization Benchmark. (☆858, updated Apr 20, 2025)
- YOLO model compression and multi-dataset training. (☆445, updated Jun 21, 2022)
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility. (☆1,982, updated Mar 25, 2023)
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV '17). (☆1,088, updated May 2, 2024)
- PyTorch Model Compression. (☆234, updated Jan 27, 2023)
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. (☆2,914, updated Mar 31, 2023)
- ☆20 (updated Aug 16, 2021)
- Quantization library for PyTorch. Support low-precision and mixed-precision quantization, with hardware implementation through TVM. (☆453, updated May 15, 2023)
- PyTorch implementation of various knowledge distillation (KD) methods. (☆1,745, updated Nov 25, 2021)