ososos888 / prune-then-distill
☆47 · Updated 2 years ago
Alternatives and similar repositories for prune-then-distill:
Users interested in prune-then-distill are comparing it to the repositories listed below.
- The official PyTorch implementation of CHEX: CHannel EXploration for CNN Model Compression (CVPR 2022). Paper is available at https://ope… ☆38 · Updated 2 years ago
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆75 · Updated last year
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022 -- Carrying out CNN Channel Pruning in a White Box ☆18 · Updated 3 years ago
- Channel pruning for accelerating very deep neural networks ☆13 · Updated 4 years ago
- ☆14 · Updated 4 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆82 · Updated last year
- ☆26 · Updated last year
- An extended version of our paper accepted by CVPR 2020, Oral -- HRank: Filter Pruning using High-Rank Feature Map ☆149 · Updated 3 years ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration". ☆75 · Updated 9 months ago
- PyTorch implementation of our paper accepted by CVPR 2022 -- IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Sh… ☆32 · Updated 3 years ago
- Implementation of Conv-based and ViT-based networks designed for CIFAR. ☆71 · Updated 2 years ago
- Official implementation of the paper "Masked Distillation with Receptive Tokens", ICLR 2023. ☆68 · Updated 2 years ago
- PyTorch implementation of our paper (TNNLS) -- Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters ☆12 · Updated 3 years ago
- [ICCV-2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆25 · Updated last year
- Official implementation for the paper "DyRep: Bootstrapping Training with Dynamic Re-parameterization", CVPR 2022 ☆43 · Updated 2 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆146 · Updated 2 years ago
- A simple PyTorch reimplementation of Online Knowledge Distillation via Collaborative Learning ☆48 · Updated 2 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆40 · Updated 2 years ago
- Awesome Knowledge-Distillation for CV ☆85 · Updated last year
- PyTorch implementation for Channel Distillation ☆100 · Updated 4 years ago
- Official code for our ECCV'22 paper "A Fast Knowledge Distillation Framework for Visual Recognition" ☆187 · Updated last year
- ☆24 · Updated 3 years ago
- A collection of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com) ☆42 · Updated 3 years ago
- ☆8 · Updated 3 years ago
- Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression, CVPR 2020. ☆63 · Updated 3 years ago
- EQ-Net [ICCV 2023] ☆29 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆33 · Updated last year
- In progress. ☆63 · Updated last year