aryanasadianuoit / Distilling-Knowledge-via-Intermediate-Classifiers
Distilling Knowledge via Intermediate Classifiers
☆15 · Updated 3 years ago
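For orientation, the repository's title names the technique: attach auxiliary classifier heads to intermediate layers of the teacher and distill their softened predictions, together with the teacher's final logits, into a compact student. The PyTorch sketch below illustrates that general idea only; `IntermediateHead`, `kd_loss`, and `total_loss` are hypothetical names, not this repository's actual code.

```python
# Hedged sketch only: names below are illustrative, not the repo's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntermediateHead(nn.Module):
    """A small classifier attached to an intermediate feature map."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse H x W to 1 x 1
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.fc(self.pool(feats).flatten(1))

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Hinton-style KD term: KL divergence between softened distributions."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def total_loss(s_logits, t_logits, head_logits, targets, alpha=0.5, T=4.0):
    """Cross-entropy on labels plus KD from the teacher's final logits
    and from each intermediate head's logits (averaged)."""
    ce = F.cross_entropy(s_logits, targets)
    kd_terms = [kd_loss(s_logits, t_logits, T)]
    kd_terms += [kd_loss(s_logits, h, T) for h in head_logits]
    return (1 - alpha) * ce + alpha * torch.stack(kd_terms).mean()
```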
Alternatives and similar repositories for Distilling-Knowledge-via-Intermediate-Classifiers
Users interested in Distilling-Knowledge-via-Intermediate-Classifiers are comparing it to the libraries listed below.
- A simple PyTorch reimplementation of Online Knowledge Distillation via Collaborative Learning ☆49 · Updated 2 years ago
- [AAAI-2020] Official implementation for "Online Knowledge Distillation with Diverse Peers". ☆74 · Updated last year
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation". ☆32 · Updated 11 months ago
- ☆27 · Updated 4 years ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration". ☆75 · Updated 10 months ago
- PyTorch implementation of the paper accepted by IEEE TNNLS, 2022: "Distilling a Powerful Student Model via Online Knowledge Distillation" ☆28 · Updated 3 years ago
- Official implementation of "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI-2021) ☆117 · Updated 4 years ago
- Pruning Filters For Efficient ConvNets, PyTorch implementation. ☆30 · Updated 5 years ago
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆40 · Updated 2 years ago
- PyTorch implementation of Data-Free Network Quantization with Adversarial Knowledge Distillation ☆30 · Updated 3 years ago
- ☆34 · Updated last year
- Code for Feature Fusion for Online Mutual Knowledge Distillation ☆26 · Updated 4 years ago
- Graph Knowledge Distillation ☆13 · Updated 5 years ago
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021) ☆64 · Updated 3 years ago
- Code for "Self-Distillation as Instance-Specific Label Smoothing" ☆16 · Updated 4 years ago
- An unofficial PyTorch implementation of "Deep Mutual Learning" for classification on CIFAR-100. ☆166 · Updated 4 years ago
- [ICLR-2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers. ☆31 · Updated 5 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- Zero-Shot Knowledge Distillation (ZSKD) with PyTorch ☆31 · Updated 2 years ago
- A PyTorch implementation of MobileNetV2 on CIFAR-10 ☆62 · Updated 2 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Implementation for the ICASSP-2022 paper "Confidence-Aware Multi-Teacher Knowledge Distillation". ☆59 · Updated 3 years ago
- A PyTorch implementation of the paper 'Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation', … ☆178 · Updated 3 years ago
- Implementation of Conv-based and ViT-based networks designed for CIFAR. ☆70 · Updated 2 years ago
- PyTorch implementation of AutoPruner ☆23 · Updated 5 years ago
- PyTorch implementation for Channel Distillation ☆101 · Updated 5 years ago
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) ☆29 · Updated 4 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- [CVPR 2020] MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning ☆94 · Updated 2 years ago
- Demonstration of knowledge transfer and generalization via distillation ☆55 · Updated 6 years ago