zju-vipa / MosaicKD
[NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
☆45 · Updated 3 years ago
Alternatives and similar repositories for MosaicKD
Users interested in MosaicKD are comparing it to the repositories listed below.
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆73 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆76 · Updated 3 years ago
- PyTorch implementation of the NeurIPS 2022 paper "Dataset Distillation via Factorization" ☆67 · Updated 3 years ago
- Official PyTorch implementation of PS-KD ☆89 · Updated 3 years ago
- ☆28 · Updated 4 years ago
- A dataset condensation method, accepted at CVPR 2022 ☆72 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22)☆116Updated 2 years ago
- [ICLR 2022]: Fast AdvProp☆35Updated 3 years ago
- Code for the CVPR 2023 paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" ☆40 · Updated 2 years ago
- Code for the ECCV 2022 paper "DICE: Leveraging Sparsification for Out-of-Distribution Detection" ☆41 · Updated 3 years ago
- Official PyTorch implementation of "Loss-Curvature Matching for Dataset Selection and Condensation" (AISTATS 2023) ☆22 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 4 years ago
- Code release for "Self-supervised Learning is More Robust to Dataset Imbalance" ☆39 · Updated 3 years ago
- [ICLR 2023 Spotlight] Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors ☆39 · Updated 2 years ago
- Official PyTorch implementation of "Learning Debiased Representation via Disentangled Feature Augmentation" (NeurIPS 2021, Oral) ☆105 · Updated 2 years ago
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation" ☆35 · Updated last year
- ☆33 · Updated 4 years ago
- [CVPR 2021] MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition ☆61 · Updated 3 years ago
- ☆37 · Updated 3 years ago
- ☆89 · Updated 3 years ago
- ☆107 · Updated 4 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 4 years ago
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" ☆100 · Updated 3 years ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach -- Official Implementation ☆47 · Updated 2 years ago
- ☆27 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- Code for our paper: Samuel and Chechik, "Distributional Robustness Loss for Long-tail Learning" ☆32 · Updated 4 years ago