Awesome machine learning model compression research papers, quantization, tools, and learning material.
☆539 · Updated Sep 21, 2024
Alternatives and similar repositories for awesome-ml-model-compression
Users interested in awesome-ml-model-compression are comparing it to the libraries listed below.
- ☆668 · Updated Aug 25, 2021
- A curated list of neural network pruning resources. ☆2,491 · Updated Apr 4, 2024
- Papers about model compression. ☆166 · Updated Feb 10, 2023
- A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research; we are co… ☆2,334 · Updated Jan 29, 2026
- Papers for deep neural network compression and acceleration. ☆401 · Updated Jun 21, 2021
- Collection of recent methods on (deep) neural network compression and acceleration. ☆954 · Updated Apr 4, 2025
- Summary and code for deep neural network quantization. ☆559 · Updated Jun 14, 2025
- List of papers related to neural network quantization in recent AI conferences and journals. ☆809 · Updated Mar 27, 2025
- A list of papers, docs, and code about efficient AIGC. This repo aims to provide information for efficient AIGC research, including languag… ☆203 · Updated Feb 10, 2025
- A list of high-quality (newest) AutoML works and lightweight models, including 1) neural architecture search, 2) lightweight structures, … ☆856 · Updated Jun 19, 2021
- Awesome LLM compression research papers and tools. ☆1,794 · Updated Feb 23, 2026
- A curated list for efficient large language models. ☆1,968 · Updated Jun 17, 2025
- [Preprint] Why Is the State of Neural Network Pruning So Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Updated Sep 9, 2025
- A list of awesome papers on deep model compression and acceleration. ☆350 · Updated Jun 19, 2021
- Rethinking the Value of Network Pruning (PyTorch, ICLR 2019). ☆1,517 · Updated Jun 7, 2020
- Distributed SDDMM kernel. ☆12 · Updated Jul 8, 2022
- micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa/Quantiz… ☆2,270 · Updated May 6, 2025
- Embedded and mobile deep learning research resources. ☆765 · Updated Mar 14, 2023
- [ECCV 2024] SparseRefine: Sparse Refinement for Efficient High-Resolution Semantic Segmentation. ☆14 · Updated Jan 10, 2025
- Awesome Knowledge Distillation. ☆3,826 · Updated this week
- Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis). ☆2,337 · Updated Sep 26, 2022
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024). ☆20 · Updated Feb 16, 2024
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models. ☆1,625 · Updated Jul 12, 2024
- [ICLR 2020] Once-for-All: Train One Network and Specialize It for Efficient Deployment. ☆1,944 · Updated Dec 14, 2023
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric. ☆60 · Updated Mar 23, 2023
- Implements quantized distillation; code for the paper "Model compression via distillation and quantization". ☆336 · Updated Jul 25, 2024
- Awesome Knowledge-Distillation. A categorized collection of knowledge distillation papers (2014–2021). ☆2,657 · Updated May 30, 2023
- Knowledge distillation papers. ☆766 · Updated Feb 10, 2023
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,111 · Updated Oct 7, 2024
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, vision foundation models, etc. ☆3,275 · Updated Sep 7, 2025
- [ICML 2023] Official implementation of the ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Updated Mar 4, 2024
- PyTorch model compression. ☆234 · Updated Jan 27, 2023
- Official PyTorch implementation of the ICLR 2022 paper QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quan… ☆128 · Updated Sep 23, 2025
- Model Quantization Benchmark. ☆862 · Updated Apr 20, 2025
- Dynamically reconfigurable architecture template and cycle-level microarchitecture simulator for dataflow accelerators. ☆30 · Updated Jul 17, 2023
- YOLO model compression and multi-dataset training. ☆444 · Updated Jun 21, 2022
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,911 · Updated Mar 31, 2023
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models". ☆323 · Updated Mar 4, 2025
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV 2017). ☆1,088 · Updated May 2, 2024
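Many of the repositories above center on quantization. As a minimal, library-agnostic illustration of the core idea they share (not the method of any specific repo listed), here is a sketch of symmetric per-tensor int8 weight quantization in NumPy; the function names and the toy tensor are illustrative assumptions:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: one scale maps floats to int8,
    # chosen so the largest-magnitude weight lands on +/-127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)  # toy weight tensor
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Round-to-nearest keeps the reconstruction error of each weight within scale/2.
print(float(np.max(np.abs(w - w_hat))))
```

Post-training quantization applies this directly to trained weights, while QAT-style repos above simulate this rounding during training so the network learns to tolerate it.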