dwang181 / active-mixup
Code for Active Mixup (CVPR 2020)
☆23 · Updated 3 years ago
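For context, mixup (which Active Mixup builds on) trains on convex combinations of example pairs and their labels. A minimal NumPy sketch of vanilla mixup, for illustration only — this is not this repository's code, and the Beta-distributed mixing weight is the convention from the original mixup formulation:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Blend two inputs and their one-hot labels with a Beta-sampled weight.

    Illustrative sketch of vanilla mixup, not the Active Mixup
    implementation from this repository.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2     # convex combination of inputs
    y = lam * y1 + (1.0 - lam) * y2     # same combination of labels
    return x, y

# usage: blend two toy "images" with one-hot labels
xa, xb = np.zeros((2, 2)), np.ones((2, 2))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
xm, ym = mixup(xa, ya, xb, yb)
```

The mixed label `ym` stays a valid probability vector, so the usual cross-entropy loss applies unchanged to the blended pair.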
Alternatives and similar repositories for active-mixup
Users interested in active-mixup are comparing it to the repositories listed below.
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 3 years ago
- Code for "Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources". (IC… ☆38 · Updated 4 years ago
- ☆57 · Updated 4 years ago
- Code for the CVPR 2021 paper "MOOD: Multi-level Out-of-distribution Detection" ☆38 · Updated 2 years ago
- ☆23 · Updated 5 years ago
- IJCAI 2021, "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation" ☆42 · Updated 2 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- Official PyTorch implementation of the MixMo framework ☆84 · Updated 4 years ago
- Paper and code for "Curriculum Learning by Optimizing Learning Dynamics" (AISTATS 2021) ☆19 · Updated 4 years ago
- PyTorch implementation of Temporal Output Discrepancy for Active Learning (ICCV 2021) ☆41 · Updated 3 years ago
- Official PyTorch implementation of our ICCV 2019 paper "Fooling Network Interpretation in Image Classification" ☆24 · Updated 5 years ago
- PyTorch implementation of our paper accepted by IEEE TNNLS 2022, "Distilling a Powerful Student Model via Online Knowledge Distillation" ☆30 · Updated 3 years ago
- Official code for "Mean Shift for Self-Supervised Learning" ☆57 · Updated 3 years ago
- Official PyTorch implementation of "Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity" (ICLR 2021 Oral) ☆105 · Updated 3 years ago
- A PyTorch implementation of contrastive learning (CL) baselines ☆14 · Updated 3 years ago
- PyTorch implementation of the paper "SuperLoss: A Generic Loss for Robust Curriculum Learning" (NeurIPS 2020) ☆29 · Updated 4 years ago
- Code repository for the paper "OODformer: Out-Of-Distribution Detection Transformer" ☆40 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- SKD: Self-supervised Knowledge Distillation for Few-shot Learning ☆100 · Updated last year
- [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… ☆125 · Updated 3 years ago
- [WACV21] Code for our paper: Samuel, Atzmon and Chechik, "From Generalized zero-shot learning to long-tail with class descriptors" ☆28 · Updated 4 years ago
- Self-supervised Label Augmentation via Input Transformations (ICML 2020) ☆105 · Updated 4 years ago
- [CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning ☆85 · Updated 3 years ago
- PyTorch implementation of our paper "EvidentialMix: Learning with Combined Open-set and Closed-set Noisy Labels" ☆28 · Updated 4 years ago
- ICLR 2021, "Learning with feature-dependent label noise: a progressive approach" ☆43 · Updated 2 years ago
- 90%+ accuracy with 40 labels; see the README for details ☆37 · Updated 5 years ago
- [SafeAI'21] Feature Space Singularity for Out-of-Distribution Detection ☆79 · Updated 4 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆42 · Updated 4 years ago
- Evaluating AlexNet features at various depths ☆40 · Updated 4 years ago
- Role-Wise Data Augmentation for Knowledge Distillation ☆19 · Updated 2 years ago