MingSun-Tse / Good-DA-in-KD
[NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective
☆37 · Updated 2 years ago
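For readers unfamiliar with the setting, the paper asks which input augmentations make knowledge distillation work best, from a statistical perspective. Below is a minimal, generic sketch of a Hinton-style KD training step on augmented inputs; all names (`student`, `teacher`, `augment`) are illustrative placeholders, not this repo's actual API.

```python
# Generic sketch (assumed names, not this repo's code): one KD step where the
# teacher and student both see the same augmented view of the batch.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between temperature-softened teacher and student
    # distributions, scaled by T^2 as in Hinton et al. (2015).
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def distill_step(student, teacher, x, augment, optimizer, T=4.0):
    x_aug = augment(x)                 # the data augmentation under study
    with torch.no_grad():
        t_logits = teacher(x_aug)      # frozen teacher, no gradients
    s_logits = student(x_aug)
    loss = kd_loss(s_logits, t_logits, T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```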
Alternatives and similar repositories for Good-DA-in-KD
Users interested in Good-DA-in-KD are comparing it to the libraries listed below.
- PyTorch implementation of the NeurIPS 2022 paper "Dataset Distillation via Factorization" ☆66 · Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆33 · Updated 2 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆50 · Updated 4 years ago
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 5 months ago
- ☆57 · Updated 4 years ago
- ☆31 · Updated 5 years ago
- Official PyTorch implementation of the CVPR 2022 paper "Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training" ☆17 · Updated 3 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning ☆40 · Updated 2 years ago
- [ICLR'21] Neural Pruning via Growing Regularization (PyTorch) ☆83 · Updated 4 years ago
- Learning recognition/segmentation models without end-to-end training. 40%-60% less GPU memory footprint. Same training time. Better performance. ☆90 · Updated 2 years ago
- A generic code base for neural network pruning, especially for pruning at initialization ☆31 · Updated 2 years ago
- Implementation of the CVPR 2022 paper "Towards Robust Vision Transformer" ☆142 · Updated 3 years ago
- [NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data ☆45 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- PyTorch implementation of the ECCV 2022 paper "Knowledge Condensation Distillation" (https://arxiv.org/abs/2207.05409) ☆30 · Updated 2 years ago
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆72 · Updated 3 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆75 · Updated 2 years ago
- Official implementation of the paper "Function-Consistent Feature Distillation" (ICLR 2023) ☆29 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆41 · Updated 2 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- Official code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆48 · Updated 2 years ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆73 · Updated 3 years ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach -- Official Implementation ☆45 · Updated 2 years ago
- [ICLR 2022] Fast AdvProp ☆35 · Updated 3 years ago
- Awesome Knowledge-Distillation for CV ☆89 · Updated last year
- Official PyTorch implementation of Super Vision Transformer (IJCV) ☆43 · Updated 2 years ago
- i-MAE PyTorch repo ☆19 · Updated last year
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 3 years ago