bhheo / BSS_distillation
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019)
☆70 · Updated 5 years ago
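
For orientation, below is a minimal sketch of the idea behind this repository: a student network is trained with the standard distillation loss plus a matching loss on adversarial samples pushed toward the teacher's decision boundary. This is not the repository's code; the simplified attack, the names (`kd_loss`, `boundary_sample`, `train_step`), and the hyperparameters `T`, `lam_kd`, `lam_bss` are illustrative assumptions.

```python
# Sketch of boundary-supporting-sample (BSS) distillation, assuming pretrained
# `teacher` and `student` classifiers. The attack below is a simplified
# stand-in for the paper's BSS procedure; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

T = 4.0  # softening temperature (assumption)

def kd_loss(student_logits, teacher_logits):
    # Hinton-style KD: KL divergence between temperature-softened outputs.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def boundary_sample(x, teacher, target, steps=10, eps=0.02):
    # Perturb x so the teacher's score for `target` rises relative to its
    # current prediction, moving the sample toward the decision boundary.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = teacher(x_adv)
        base = logits.argmax(dim=1, keepdim=True)
        gap = logits.gather(1, target[:, None]) - logits.gather(1, base)
        grad, = torch.autograd.grad(gap.sum(), x_adv)
        x_adv = (x_adv + eps * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()

def train_step(x, y, student, teacher, opt, lam_kd=1.0, lam_bss=1.0):
    with torch.no_grad():
        t_logits = teacher(x)
    # Attack target: the teacher's runner-up class for each sample.
    target = t_logits.topk(2, dim=1).indices[:, 1]
    x_bss = boundary_sample(x, teacher, target)
    with torch.no_grad():
        t_bss = teacher(x_bss)
    s_logits = student(x)
    loss = (F.cross_entropy(s_logits, y)
            + lam_kd * kd_loss(s_logits, t_logits)
            + lam_bss * kd_loss(student(x_bss), t_bss))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```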
Related projects:
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) ☆104 · Updated 5 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method ☆24 · Updated 4 years ago
- Unofficial PyTorch implementation of Born-Again Neural Networks. ☆52 · Updated 3 years ago
- [AAAI-2020] Official implementation for "Online Knowledge Distillation with Diverse Peers". ☆72 · Updated last year
- Self-supervised Label Augmentation via Input Transformations (ICML 2020) ☆103 · Updated 3 years ago
- Source code for 'Knowledge Distillation via Instance Relationship Graph' ☆28 · Updated 5 years ago
- Zero-Shot Knowledge Distillation in Deep Networks (ICML 2019) ☆49 · Updated 5 years ago
- DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks, https://arxiv.org/abs/1901.09229 ☆66 · Updated 3 years ago
- [ICCV'19] Improving Adversarial Robustness via Guided Complement Entropy ☆40 · Updated 5 years ago
- Learning Metrics from Teachers: Compact Networks for Image Embedding (CVPR19) ☆76 · Updated 5 years ago
- Zero-Shot Knowledge Distillation in Deep Networks ☆64 · Updated 2 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆80 · Updated 2 years ago
- Source code accompanying our CVPR 2019 paper: "NetTailor: Tuning the architecture, not just the weights." ☆52 · Updated 3 years ago
- [CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning ☆84 · Updated 2 years ago
- PyTorch implementation of Weighted Batch-Normalization layers ☆37 · Updated 4 years ago
- PyTorch implementation for GAL. ☆55 · Updated 4 years ago
- A PyTorch reimplementation of the paper "Momentum Contrast for Unsupervised Visual Representation Learning" ☆43 · Updated 4 years ago
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV'2020) ☆30 · Updated 4 years ago
- Unsupervised Domain Adaptation through Self-Supervision ☆79 · Updated 2 years ago
- Triplet Loss for Knowledge Distillation ☆17 · Updated 2 years ago
- Lifelong Learning via Progressive Distillation and Retrospection ☆14 · Updated 5 years ago
- Rethinking Feature Distribution for Loss Functions in Image Classification ☆40 · Updated last year
- Meta-Learning based Noise-Tolerant Training ☆122 · Updated 4 years ago
- Project page for our paper: Interpreting Adversarially Trained Convolutional Neural Networks ☆63 · Updated 5 years ago