xeanzheng / CSKD
Code for the ECCV 2020 paper "Improving Knowledge Distillation via Category Structure".
☆10 · Updated 4 years ago
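For background, the sketch below shows a standard Hinton-style soft-target distillation loss in PyTorch. It is not the category-structure method from the CSKD paper, only the vanilla objective that such methods typically extend; the temperature `T` and weight `alpha` are illustrative defaults.

```python
# Minimal sketch of vanilla knowledge distillation (Hinton et al.), NOT the
# category-structure loss from the CSKD paper: the student matches the
# teacher's softened output distribution plus the usual cross-entropy.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    # KL divergence between softened student and teacher distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

if __name__ == "__main__":
    # Toy usage with random tensors (hypothetical batch of 8, 10 classes).
    s = torch.randn(8, 10)
    t = torch.randn(8, 10)
    y = torch.randint(0, 10, (8,))
    print(kd_loss(s, t, y).item())
```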
Alternatives and similar repositories for CSKD
Users who are interested in CSKD are comparing it to the libraries listed below.
- Graph Knowledge Distillation ☆13 · Updated 5 years ago
- Distilling Knowledge via Intermediate Classifiers ☆15 · Updated 3 years ago
- ☆27 · Updated 4 years ago
- Implementation of the "Adapting Auxiliary Losses Using Gradient Similarity" paper ☆32 · Updated 6 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- An efficient implementation for ImageNet classification ☆17 · Updated 4 years ago
- ☆19 · Updated 5 years ago
- Knowledge Extraction with No Observable Data (NeurIPS 2019) ☆44 · Updated 5 years ago
- ZSKD with PyTorch ☆31 · Updated 2 years ago
- ☆22 · Updated 5 years ago
- (NeurIPS 2019) Deep Model Transferability from Attribution Maps ☆20 · Updated 5 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022 -- Distilling a Powerful Student Model via Online Knowledge Distillation ☆29 · Updated 3 years ago
- [AAAI-2020] Official implementation for "Online Knowledge Distillation with Diverse Peers". ☆74 · Updated 2 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Code for Active Mixup in CVPR 2020 ☆23 · Updated 3 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching" ☆141 · Updated 5 years ago
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… ☆44 · Updated 3 years ago
- An Efficient Dataset Condensation Plugin and Its Application to Continual Learning. NeurIPS, 2023. ☆11 · Updated last year
- Code for "Balanced Knowledge Distillation for Long-tailed Learning" ☆27 · Updated last year
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 3 years ago
- This repository demonstrates the application of our proposed task-free continual learning method on a synthetic experiment. ☆13 · Updated 6 years ago
- This repo is for our paper: Normalization Techniques in Training DNNs: Methodology, Analysis and Application ☆85 · Updated 4 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?", a paper that appeared in the NeurIPS 2021 proceedings. ☆33 · Updated last year
- Data-Free Network Quantization With Adversarial Knowledge Distillation (PyTorch) ☆30 · Updated 3 years ago
- Learning Representations that Support Robust Transfer of Predictors ☆20 · Updated 3 years ago
- Code for "Self-Distillation as Instance-Specific Label Smoothing" ☆16 · Updated 4 years ago
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes ☆23 · Updated 5 years ago
- [ICML 2021] "Efficient Lottery Ticket Finding: Less Data is More" by Zhenyu Zhang*, Xuxi Chen*, Tianlong Chen*, Zhangyang Wang ☆25 · Updated 3 years ago
- [ICLR 2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers. ☆31 · Updated 5 years ago