VainF / Data-Free-Adversarial-Distillation
Code and pretrained models for paper: Data-Free Adversarial Distillation
☆99 · Updated 2 years ago
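The paper trains a student network without touching the teacher's original data: a generator synthesizes inputs that maximize the teacher-student discrepancy, while the student minimizes that same discrepancy on the synthesized inputs. Below is a minimal sketch of one such adversarial round, assuming user-supplied `teacher`, `student`, and `generator` modules and a noise dimension `nz` (all hypothetical names, not this repository's actual API):

```python
import torch
import torch.nn.functional as F

def dfad_step(teacher, student, generator, opt_s, opt_g, nz, batch_size, device):
    """One adversarial distillation round: the student imitates, the generator attacks."""
    teacher.eval()  # the pretrained teacher stays frozen throughout

    # Imitation stage: the student minimizes its discrepancy to the teacher
    # on synthetic samples (the generator is detached via no_grad here).
    with torch.no_grad():
        fake = generator(torch.randn(batch_size, nz, device=device))
        t_logits = teacher(fake)
    loss_s = F.l1_loss(student(fake), t_logits)  # MAE on logits, per the paper
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()

    # Generation stage: the generator maximizes the same discrepancy, pushing
    # samples toward regions where the student has not yet caught up.
    fake = generator(torch.randn(batch_size, nz, device=device))
    loss_g = -F.l1_loss(student(fake), teacher(fake))
    opt_g.zero_grad()
    loss_g.backward()  # gradients flow through teacher and student activations,
    opt_g.step()       # but only the generator's parameters are updated
    return loss_s.item(), loss_g.item()
```

The paper motivates the L1 (MAE) discrepancy over KL divergence because KL's gradient vanishes as the student approaches the teacher, starving the generator of a training signal.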
Alternatives and similar repositories for Data-Free-Adversarial-Distillation:
Users interested in Data-Free-Adversarial-Distillation are comparing it to the repositories listed below.
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆72 · Updated 3 years ago
- [AAAI-2020] Official implementation for "Online Knowledge Distillation with Diverse Peers". ☆74 · Updated last year
- ☆107 · Updated 3 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆69 · Updated 2 years ago
- ☆30 · Updated 4 years ago
- Learning recognition/segmentation models without end-to-end training. 40%-60% less GPU memory footprint. Same training time. Better perfo… ☆90 · Updated 2 years ago
- Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019) ☆71 · Updated 5 years ago
- An unofficial PyTorch implementation of "Deep Mutual Learning" for classification on CIFAR-100. ☆167 · Updated 4 years ago
- Knowledge Amalgamation Engine ☆98 · Updated last year
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- ☆22 · Updated 4 years ago
- PyTorch implementation of "Data-Free Network Quantization with Adversarial Knowledge Distillation". ☆29 · Updated 3 years ago
- [ECCV2020] Knowledge Distillation Meets Self-Supervision ☆236 · Updated 2 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching" ☆141 · Updated 5 years ago
- [AAAI-2021] Official implementation for "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" ☆117 · Updated 4 years ago
- [CVPR 2020] When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks ☆124 · Updated 4 years ago
- Official PyTorch implementation of "Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity" (ICLR'21 Oral) ☆103 · Updated 3 years ago
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning. ☆151 · Updated 2 years ago
- PyTorch implementation for Channel Distillation ☆100 · Updated 4 years ago
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification ☆82 · Updated 3 years ago
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models" Tianlong Chen, Jon… ☆69 · Updated 2 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020) ☆50 · Updated 4 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆42 · Updated 4 years ago
- ☆33 · Updated last year
- Implementation for the paper (CVPR Oral): High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks ☆237 · Updated 4 years ago
- Regularizing Class-wise Predictions via Self-knowledge Distillation (CVPR 2020) ☆107 · Updated 4 years ago
- [NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data ☆44 · Updated 2 years ago
- Code for our paper "Informative Dropout for Robust Representation Learning: A Shape-bias Perspective" (ICML 2020) ☆125 · Updated 2 years ago
- PyTorch implementation of our paper "Lottery Jackpots Exist in Pre-trained Models", accepted by TPAMI 2023 ☆34 · Updated last year