xeanzheng / CSKD
Code for the ECCV 2020 paper "Improving Knowledge Distillation via Category Structure".
☆10 · Updated 4 years ago
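For context: CSKD's contribution is a category-structure term on top of standard knowledge distillation. The baseline it extends is the classic temperature-scaled KD loss (Hinton et al.); a minimal pure-Python sketch of that vanilla loss is below. This is illustrative only, not the repository's code, and the function names are my own.

```python
import math

def softened_probs(logits, T):
    """Softmax over logits divided by temperature T (numerically stable)."""
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Vanilla distillation term: KL(teacher || student) between
    temperature-softened distributions, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures."""
    p = softened_probs(teacher_logits, T)
    q = softened_probs(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In practice this term is added to the usual cross-entropy on ground-truth labels; CSKD additionally transfers relations between categories rather than only per-sample outputs.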
Alternatives and similar repositories for CSKD
Users interested in CSKD are comparing it to the repositories listed below.
- Graph Knowledge Distillation ☆13 · Updated 5 years ago
- [AAAI-2020] Official implementation for "Online Knowledge Distillation with Diverse Peers". ☆74 · Updated last year
- Distilling Knowledge via Intermediate Classifiers ☆15 · Updated 3 years ago
- A simple PyTorch reimplementation of Online Knowledge Distillation via Collaborative Learning ☆48 · Updated 2 years ago
- Paper and code for "Curriculum Learning by Optimizing Learning Dynamics" (AISTATS 2021) ☆19 · Updated 3 years ago
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022: "Distilling a Powerful Student Model via Online Knowledge Distillation" ☆28 · Updated 3 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method ☆24 · Updated 4 years ago
- Reproducing VID from CVPR 2019 (work in progress) ☆20 · Updated 5 years ago
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation". ☆32 · Updated 9 months ago
- ☆48 · Updated 5 years ago
- IJCAI 2021, "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation" ☆41 · Updated 2 years ago
- Code for Feature Fusion for Online Mutual Knowledge Distillation ☆26 · Updated 4 years ago
- ☆26 · Updated 4 years ago
- ☆33 · Updated last year
- [ICASSP 2020] Code release for the paper "Heterogeneous Domain Generalization via Domain Mixup" ☆26 · Updated 4 years ago
- Code for our paper "Regularizing Neural Networks via Adversarial Model Perturbation", CVPR 2021 ☆37 · Updated 3 years ago
- A PyTorch implementation of Feature Boosting and Suppression ☆18 · Updated 4 years ago
- Code for "Self-Distillation as Instance-Specific Label Smoothing" ☆16 · Updated 4 years ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration". ☆75 · Updated 9 months ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- Information Bottleneck Approach to Spatial Attention Learning, IJCAI 2021 ☆15 · Updated 3 years ago
- ☆9 · Updated 3 years ago
- Code for the CVPR 2019 paper "Progressive Feature Alignment for Unsupervised Domain Adaptation" ☆32 · Updated 5 years ago
- Implementation of the paper "Adapting Auxiliary Losses Using Gradient Similarity" ☆32 · Updated 6 years ago
- Published in IEEE Transactions on Artificial Intelligence ☆56 · Updated 3 years ago
- Distilling knowledge from an ensemble of multiple teacher networks into a student network with multiple heads ☆7 · Updated 3 years ago
- Code release for "Dynamic Domain Adaptation for Efficient Inference" (CVPR 2021) ☆33 · Updated 3 years ago
- A collection of trends in transfer learning ☆17 · Updated 4 years ago
- ☆10 · Updated 4 years ago
- Source code for "Knowledge Distillation via Instance Relationship Graph" ☆30 · Updated 5 years ago