Awesome machine learning model compression research papers, quantization, tools, and learning material.
☆543 · Sep 21, 2024 · Updated last year
Alternatives and similar repositories for awesome-ml-model-compression
Users interested in awesome-ml-model-compression are comparing it to the libraries listed below.
- ☆670 · Aug 25, 2021 · Updated 4 years ago
- A curated list of neural network pruning resources. ☆2,492 · Apr 4, 2024 · Updated 2 years ago
- papers about model compression ☆166 · Feb 10, 2023 · Updated 3 years ago
- A list of papers, docs, codes about model quantization. This repo is aimed to provide the info for model quantization research, we are co… ☆2,343 · Apr 5, 2026 · Updated last week
- Papers for deep neural network compression and acceleration ☆401 · Jun 21, 2021 · Updated 4 years ago
- Collection of recent methods on (deep) neural network compression and acceleration. ☆954 · Apr 4, 2025 · Updated last year
- Summary, Code for Deep Neural Network Quantization ☆558 · Jun 14, 2025 · Updated 10 months ago
- List of papers related to neural network quantization in recent AI conferences and journals. ☆814 · Mar 27, 2025 · Updated last year
- A list of papers, docs, codes about efficient AIGC. This repo is aimed to provide the info for efficient AIGC research, including languag… ☆205 · Feb 10, 2025 · Updated last year
- A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures,… ☆857 · Jun 19, 2021 · Updated 4 years ago
- Awesome LLM compression research papers and tools. ☆1,806 · Feb 23, 2026 · Updated last month
- A curated list for Efficient Large Language Models ☆1,980 · Jun 17, 2025 · Updated 9 months ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Sep 9, 2025 · Updated 7 months ago
- a list of awesome papers on deep model compression and acceleration ☆350 · Jun 19, 2021 · Updated 4 years ago
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019) ☆1,515 · Jun 7, 2020 · Updated 5 years ago
- Distributed SDDMM Kernel ☆12 · Jul 8, 2022 · Updated 3 years ago
- micronet, a model compression and deploy lib. compression: 1) quantization: quantization-aware-training (QAT), High-Bit (>2b) (DoReFa/Quantiz… ☆2,269 · May 6, 2025 · Updated 11 months ago
- Embedded and mobile deep learning research resources ☆766 · Mar 14, 2023 · Updated 3 years ago
- Awesome Knowledge Distillation ☆3,844 · Mar 22, 2026 · Updated 3 weeks ago
- [ECCV 2024] SparseRefine: Sparse Refinement for Efficient High-Resolution Semantic Segmentation ☆15 · Jan 10, 2025 · Updated last year
- Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis) ☆2,334 · Sep 26, 2022 · Updated 3 years ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Feb 16, 2024 · Updated 2 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,632 · Jul 12, 2024 · Updated last year
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,948 · Dec 14, 2023 · Updated 2 years ago
- Implements quantized distillation. Code for our paper "Model compression via distillation and quantization" ☆336 · Jul 25, 2024 · Updated last year
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆61 · Mar 23, 2023 · Updated 3 years ago
- Awesome Knowledge-Distillation. Categorized knowledge distillation papers (2014–2021). ☆2,664 · May 30, 2023 · Updated 2 years ago
- knowledge distillation papers ☆765 · Feb 10, 2023 · Updated 3 years ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,115 · Oct 7, 2024 · Updated last year
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc. ☆3,284 · Sep 7, 2025 · Updated 7 months ago
- PyTorch Model Compression ☆234 · Jan 27, 2023 · Updated 3 years ago
- [ICML 2023] This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Mar 4, 2024 · Updated 2 years ago
- The official PyTorch implementation of the ICLR 2022 paper, QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quan… ☆129 · Sep 23, 2025 · Updated 6 months ago
- Model Quantization Benchmark ☆864 · Apr 20, 2025 · Updated 11 months ago
- Dynamically Reconfigurable Architecture Template and Cycle-level Microarchitecture Simulator for Dataflow AcCelerators ☆30 · Jul 17, 2023 · Updated 2 years ago
- YOLO Model Compression Multidataset Training ☆444 · Jun 21, 2022 · Updated 3 years ago
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,909 · Mar 31, 2023 · Updated 3 years ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆324 · Mar 4, 2025 · Updated last year
- Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17) ☆1,089 · May 2, 2024 · Updated last year
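Many of the quantization repositories above (SmoothQuant, PD-Quant, QDrop, the quantization paper lists) build on the same basic primitive: mapping float weights to low-bit integers with a scale factor. A minimal sketch of symmetric per-tensor int8 post-training quantization, in NumPy, is shown below; this is an illustrative toy, not the method of any listed repo — real toolkits add calibration data, per-channel scales, and hardware-aware rounding.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: one scale for the whole tensor."""
    scale = np.abs(w).max() / 127.0          # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Round-trip error is bounded by half the quantization step (scale / 2).
err = np.abs(w - w_hat).max()
```

Post-training quantization applies this after training using a small calibration set to pick scales; quantization-aware training (QAT, as in LLM-QAT or micronet above) instead simulates the round-trip during training so the network learns to tolerate it.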