AnTuo1998 / AE-KD (☆26, updated 3 years ago)
Related projects
Alternatives and complementary repositories for AE-KD
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… (☆80, updated 2 years ago)
- [AAAI 2020] Official implementation of "Online Knowledge Distillation with Diverse Peers" (☆73, updated last year)
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) (☆30, updated 4 years ago)
- [NeurIPS 2021] "Fine Samples for Learning with Noisy Labels" (☆38, updated 2 years ago)
- Code for the paper: Samuel and Chechik, "Distributional Robustness Loss for Long-tail Learning" (☆29, updated 2 years ago)
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation" (☆31, updated 3 months ago)
- Graph Knowledge Distillation (☆13, updated 4 years ago)
- [CVPR 2020] Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective (☆24, updated 4 years ago)
- Code release for "Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning" (NeurIPS 2019) (☆24, updated 2 years ago)
- [ICLR 2022] "Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity" by Shiwei Liu,… (☆27, updated 2 years ago)
- [CVPR 2021] MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition (☆62, updated last year)
- [ICML 2021] "Self-Damaging Contrastive Learning", Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang (☆63, updated 2 years ago)
- A simple PyTorch reimplementation of Online Knowledge Distillation via Collaborative Learning (☆48, updated last year)
- Code for ViTAS: Vision Transformer Architecture Search (☆51, updated 3 years ago)
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020) (☆50, updated 3 years ago)
- Code for "Feature Fusion for Online Mutual Knowledge Distillation" (☆24, updated 4 years ago)
- [ICLR 2021] Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization (☆40, updated 3 years ago)
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) (☆15, updated last year)
- [NeurIPS 2022] Official implementation of "Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach" (☆42, updated last year)
- Code for "Balanced Knowledge Distillation for Long-tailed Learning" (☆27, updated last year)
- PyTorch implementation of "Data-Free Network Quantization With Adversarial Knowledge Distillation" (☆29, updated 3 years ago)
- Learning with Instance-Dependent Label Noise: A Sample Sieve Approach (ICLR 2021) (☆34, updated 3 years ago)
- [ICASSP 2020] Code release for the paper "Heterogeneous Domain Generalization via Domain Mixup" (☆24, updated 4 years ago)
- A generic code base for neural network pruning, especially for pruning at initialization (☆30, updated 2 years ago)
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" (☆41, updated 2 years ago)