bolianchen / Data-Free-Learning-of-Student-Networks
☆20 · Updated 3 years ago
Related projects:
- Data-Free Network Quantization With Adversarial Knowledge Distillation (PyTorch) · ☆29 · Updated 3 years ago
- ☆29 · Updated 4 years ago
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" · ☆95 · Updated last year
- ☆26 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… · ☆80 · Updated 2 years ago
- [IJCAI 2021] Contrastive Model Inversion for Data-Free Knowledge Distillation · ☆65 · Updated 2 years ago
- ☆105 · Updated 2 years ago
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… · ☆43 · Updated 2 years ago
- Repo for the paper "Episodic Training for Domain Generalization" (https://arxiv.org/abs/1902.00113) · ☆54 · Updated last year
- [NeurIPS 2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data · ☆45 · Updated last year
- ☆20 · Updated 10 months ago
- Implementation of "Effective Sparsification of Neural Networks with Global Sparsity Constraint" · ☆28 · Updated 2 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" · ☆41 · Updated 3 years ago
- PyTorch implementation of the Adam-NSCL algorithm from the CVPR 2021 (oral) paper "Training Networks in Null Space for Continual Learning" · ☆49 · Updated 3 years ago
- Data-Free Knowledge Distillation · ☆19 · Updated 2 years ago
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020) · ☆50 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- A simple PyTorch reimplementation of "Online Knowledge Distillation via Collaborative Learning" · ☆47 · Updated last year
- Code for the ICLR 2020 paper "Knowledge Consistency between Neural Networks and Beyond" · ☆16 · Updated 4 years ago
- [CVPR 2021 Oral] PyTorch implementation of "Adversarial Robustness under Long-Tailed Distribution" · ☆101 · Updated 3 years ago
- ☆58 · Updated 2 years ago
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) · ☆12 · Updated last year
- [AAAI 2020] Official implementation of "Online Knowledge Distillation with Diverse Peers" · ☆72 · Updated last year
- [CVPR 2021] MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition · ☆62 · Updated last year
- Code for the CVPR 2020 paper "Distilling Cross-Task Knowledge via Relationship Matching" · ☆48 · Updated 3 years ago
- Learning recognition/segmentation models without end-to-end training; 40%–60% less GPU memory footprint, same training time, better perfo… · ☆89 · Updated last year
- PyTorch implementation of "Dataset Distillation via Factorization" (NeurIPS 2022) · ☆61 · Updated last year
- [AAAI 2022] Up to 100x Faster Data-free Knowledge Distillation · ☆66 · Updated last year
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… · ☆88 · Updated last year
- Official implementation of "Removing Batch Normalization Boosts Adversarial Training" (ICML 2022) · ☆19 · Updated 2 years ago