bhheo / BSS_distillation
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019)
☆71 · Updated 5 years ago
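For context, below is a minimal PyTorch sketch of the general idea named in the repository's title: a standard softened-logit distillation loss plus an extra distillation term on samples perturbed toward the teacher's decision boundary. This is an illustrative sketch only, not the repository's code; the function names, the attack loop, and all hyperparameters are assumptions.

```python
# Hedged sketch of distillation with boundary-supporting adversarial samples.
# Not the authors' implementation; names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard softened-logit distillation loss (KL divergence at temperature T)."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def boundary_supporting_samples(teacher, x, steps=10, step_size=0.01):
    """Perturb x toward the teacher's runner-up class so the perturbed sample
    lands near the teacher's decision boundary (simple targeted FGSM-style loop)."""
    x_adv = x.clone().detach().requires_grad_(True)
    with torch.no_grad():
        top2 = teacher(x).topk(2, dim=1).indices   # [top-1 class, runner-up class]
    target = top2[:, 1]                            # push toward the runner-up class
    for _ in range(steps):
        logits = teacher(x_adv)
        loss = F.cross_entropy(logits, target)     # minimizing this raises the runner-up score
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - step_size * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()

def bss_style_step(student, teacher, x, y, alpha=0.5, beta=1.0, T=4.0):
    """One training step: cross-entropy on labels, distillation on clean inputs,
    and distillation on the boundary-supporting samples."""
    teacher.eval()
    x_bss = boundary_supporting_samples(teacher, x)
    with torch.no_grad():
        t_clean, t_bss = teacher(x), teacher(x_bss)
    s_clean, s_bss = student(x), student(x_bss)
    loss = F.cross_entropy(s_clean, y)
    loss = loss + alpha * kd_loss(s_clean, t_clean, T)
    loss = loss + beta * kd_loss(s_bss, t_bss, T)
    return loss
```

The boundary-supporting samples above are generated by a simple targeted step toward the teacher's runner-up class; the paper defines its own criterion for selecting and stopping these samples, so treat this only as an outline of the training loop.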
Alternatives and similar repositories for BSS_distillation:
Users interested in BSS_distillation are comparing it to the repositories listed below
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) ☆104 · Updated 5 years ago
- [ICCV'19] Improving Adversarial Robustness via Guided Complement Entropy ☆40 · Updated 5 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method ☆24 · Updated 4 years ago
- [CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning ☆85 · Updated 3 years ago
- Learning Metrics from Teachers: Compact Networks for Image Embedding (CVPR19) ☆76 · Updated 6 years ago
- Unofficial PyTorch implementation of Born-Again Neural Networks. ☆53 · Updated 4 years ago
- Self-supervised Label Augmentation via Input Transformations (ICML 2020) ☆106 · Updated 4 years ago
- [AAAI-2020] Official implementation for "Online Knowledge Distillation with Diverse Peers". ☆74 · Updated last year
- Zero-Shot Knowledge Distillation in Deep Networks ☆65 · Updated 3 years ago
- Source code accompanying our CVPR 2019 paper: "NetTailor: Tuning the architecture, not just the weights." ☆52 · Updated 3 years ago
- Lifelong Learning via Progressive Distillation and Retrospection ☆14 · Updated 6 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- Further improve robustness of mixup-trained models in inference (ICLR 2020) ☆60 · Updated 4 years ago
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020) ☆50 · Updated 4 years ago
- Smooth Adversarial Training ☆67 · Updated 4 years ago
- Official implementation of MEAL: Multi-Model Ensemble via Adversarial Learning (AAAI 2019) ☆177 · Updated 5 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching" ☆141 · Updated 5 years ago
- [ICML'19] How does Disagreement Help Generalization against Label Corruption? ☆84 · Updated 5 years ago
- Project page for our paper: Interpreting Adversarially Trained Convolutional Neural Networks ☆66 · Updated 5 years ago
- Code and pretrained models for the paper: Data-Free Adversarial Distillation ☆99 · Updated 2 years ago
- Code for "Adversarial Metric Attack for Person Re-identification" ☆32 · Updated 6 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 4 years ago
- Code for our paper "Informative Dropout for Robust Representation Learning: A Shape-bias Perspective" (ICML 2020) ☆125 · Updated 2 years ago
- Zero-Shot Knowledge Distillation in Deep Networks (ICML 2019) ☆49 · Updated 5 years ago
- Deeply-supervised Knowledge Synergy (CVPR 2019) ☆67 · Updated 3 years ago
- DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks https://arxiv.org/abs/1901.09229 ☆66 · Updated 4 years ago