VITA-Group / Nasty-Teacher
[ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students" by Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Chenyu You, Xiaohui Xie, Zhangyang Wang
☆81 · Updated 2 years ago
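For context, the paper's core idea is "self-undermining knowledge distillation": the nasty teacher is trained to stay accurate on the true labels while pushing its softened outputs away from those of a normally trained reference network, so any student distilling from it inherits a misleading output distribution. Below is a minimal PyTorch sketch of that objective; the function name and the `tau`/`omega` hyperparameter values are illustrative, not the repo's actual API.

```python
import torch.nn.functional as F

def self_undermining_kd_loss(teacher_logits, reference_logits, labels,
                             tau=4.0, omega=0.01):
    """Sketch of the nasty-teacher objective: keep the teacher accurate
    (cross-entropy term) while *maximizing* the KL divergence between its
    softened outputs and those of a normally trained reference network.
    `tau` and `omega` are illustrative defaults, not the paper's tuned values."""
    ce = F.cross_entropy(teacher_logits, labels)
    kl = F.kl_div(
        F.log_softmax(teacher_logits / tau, dim=1),
        F.softmax(reference_logits.detach() / tau, dim=1),
        reduction="batchmean",
    ) * tau ** 2
    # Subtracting the KL term means that minimizing this loss pushes the
    # teacher's output distribution away from the reference network's.
    return ce - omega * kl
```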
Related projects
Alternatives and complementary repositories for Nasty-Teacher
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020).☆50Updated 3 years ago
- Data-Free Network Quantization With Adversarial Knowledge Distillation PyTorch☆29Updated 3 years ago
- Smooth Adversarial Training☆67Updated 4 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" (☆41, updated 4 years ago)
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models" by Tianlong Chen, Jon… (☆68, updated last year)
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… (☆90, updated last year)
- [ICLR 2022] "Sparsity Winning Twice: Better Robust Generalization from More Efficient Training" by Tianlong Chen*, Zhenyu Zhang*, Pengjun… (☆37, updated 2 years ago)
- Official implementation of "Removing Batch Normalization Boosts Adversarial Training" (ICML'22) (☆19, updated 2 years ago)
- [ICML 2021] "Efficient Lottery Ticket Finding: Less Data is More" by Zhenyu Zhang*, Xuxi Chen*, Tianlong Chen*, Zhangyang Wang (☆25, updated 2 years ago)
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… (☆43, updated 2 years ago)
- A generic code base for neural network pruning, especially for pruning at initialization (☆30, updated 2 years ago)
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) (☆15, updated last year)
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild (☆53, updated 2 years ago)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes (☆22, updated 4 years ago)
- Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019) (☆70, updated 5 years ago)
- PyTorch implementation of Adversarially Robust Distillation (ARD) (☆59, updated 5 years ago)
- Further improving the robustness of mixup-trained models at inference time (ICLR 2020) (☆60, updated 4 years ago)
- [NeurIPS 2020] "Robust Pre-Training by Adversarial Contrastive Learning" by Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang (☆113, updated 2 years ago)
- Official PyTorch implementation of "Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity" (ICLR'21 Oral) (☆104, updated 2 years ago)
- Zero-Shot Knowledge Distillation in Deep Networks (☆64, updated 2 years ago)
- [ICML 2021] "Self-Damaging Contrastive Learning" by Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang (☆63, updated 2 years ago)
- Code for ViTAS: Vision Transformer Architecture Search (☆51, updated 3 years ago)
- Robust Contrastive Learning Using Negative Samples with Diminished Semantics (NeurIPS 2021) (☆39, updated 2 years ago)
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" (☆95, updated last year)
- Code for a NeurIPS 2019 paper (☆48, updated 4 years ago)
- [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: … (☆49, updated 2 years ago)