HuangCongQing / model-compression-optimization
Model compression and optimization for PyTorch deployment, including knowledge distillation, quantization, and pruning.
☆18 · Updated 7 months ago
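As a quick orientation to the "knowledge distillation" part of the repositories below, here is a minimal sketch of Hinton-style knowledge distillation in PyTorch. It is illustrative only, not code from this repository; the temperature `T` and mixing weight `alpha` are assumed, commonly used defaults.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft term: KL divergence between the teacher and student distributions
    # softened by temperature T (scaled by T^2, as in Hinton et al., 2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: random tensors stand in for real teacher/student outputs.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```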
Alternatives and similar repositories for model-compression-optimization:
Users interested in model-compression-optimization are comparing it to the libraries listed below.
- The official implementation of LumiNet: The Bright Side of Perceptual Knowledge Distillation (https://arxiv.org/abs/2310.03669) ☆19 · Updated last year
- ☆46 · Updated 2 years ago
- ☆14 · Updated 4 years ago
- Model compression demos (pruning, quantization, knowledge distillation) ☆74 · Updated 5 years ago
- Official implementation for "Knowledge Distillation with Refined Logits". ☆13 · Updated 8 months ago
- [NeurIPS 2023] MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory ☆67 · Updated last year
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- Model Compression: 1. Pruning (BN Pruning) 2. Knowledge Distillation (Hinton) 3. Quantization (MNN) 4. Deployment (MNN) ☆79 · Updated 4 years ago
- ☆17 · Updated 3 years ago
- EQ-Net [ICCV 2023] ☆29 · Updated last year
- Implementation of Conv-based and ViT-based networks designed for CIFAR. ☆71 · Updated 2 years ago
- [AAAI 2023] Official PyTorch Code for "Curriculum Temperature for Knowledge Distillation" ☆172 · Updated 4 months ago
- ☆26 · Updated last year
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆74 · Updated last year
- ☆33 · Updated last year
- [TMLR] Official PyTorch implementation of paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio… ☆44 · Updated 6 months ago
- ☆25 · Updated 2 years ago
- Auto-Prox-AAAI24 ☆12 · Updated 11 months ago
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆63 · Updated 7 months ago
- Quantization of MobileNetV3 on a custom dataset: the model is compressed by 90% with almost no loss in accuracy. Paper: HAQ: Hardware-Aware Automated Quantization with Mixed Precision ☆17 · Updated 3 years ago
- PyTorch implementation of our paper (TNNLS) -- Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters ☆12 · Updated 3 years ago
- The official project website of "Small Scale Data-Free Knowledge Distillation" (SSD-KD for short, published in CVPR 2024). ☆17 · Updated 10 months ago
- Jupyter notebook tutorials for MMDeploy ☆35 · Updated 2 years ago
- Provides some new architectures, channel pruning, and quantization methods for yolov5 ☆29 · Updated 6 months ago
- Runner-up solution of the TensorRT 2022 competition: accelerating the MobileViT model with TensorRT ☆65 · Updated 2 years ago
- [CVPRW 2021] Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms ☆29 · Updated 2 years ago
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation. NeurIPS 2022. ☆32 · Updated 2 years ago
- ☆32 · Updated 4 years ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration". ☆75 · Updated 8 months ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆82 · Updated last year