zju-vipa / CMI
[IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation
☆68 · Updated 2 years ago
Alternatives and similar repositories for CMI:
Users interested in CMI are comparing it to the repositories listed below.
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆67 · Updated 2 years ago
- ☆83 · Updated last year
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" ☆95 · Updated 2 years ago
- Data-Free Knowledge Distillation ☆20 · Updated 2 years ago
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated last year
- PyTorch implementation of "Dataset Distillation via Factorization" (NeurIPS 2022) ☆63 · Updated 2 years ago
- [NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data ☆44 · Updated 2 years ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆95 · Updated 7 months ago
- Efficient Dataset Distillation by Representative Matching ☆110 · Updated 10 months ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆107 · Updated last year
- [CVPR-2022] Official implementation of "Knowledge Distillation with the Reused Teacher Classifier" ☆92 · Updated 2 years ago
- Code for "Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint" ☆16 · Updated last year
- PyTorch implementation of Data-Free Network Quantization with Adversarial Knowledge Distillation ☆29 · Updated 3 years ago
- ☆30 · Updated 2 years ago
- ☆22 · Updated last year
- ☆26 · Updated 3 years ago
- Official PyTorch implementation for the CVPR 2022 paper "Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training" ☆17 · Updated 2 years ago
- ☆60 · Updated last year
- ☆14 · Updated last year
- Official PyTorch implementation of PS-KD ☆83 · Updated 2 years ago
- Source code for "Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score" (ICML 2023) ☆35 · Updated 3 months ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- [CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zha… ☆52 · Updated last year
- Official code for "Dataset Distillation using Neural Feature Regression" (NeurIPS 2022) ☆46 · Updated 2 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆90 · Updated last year
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆22 · Updated 6 months ago
- [NeurIPS 2023 Spotlight] Large-scale Dataset Distillation/Condensation; 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆125 · Updated 2 months ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach (official implementation) ☆44 · Updated last year
- Official implementation of "Knowledge Diffusion for Distillation" (NeurIPS 2023) ☆79 · Updated 11 months ago