haolibai / Cross-Distillation
Code for the paper "Few Shot Network Compression via Cross Distillation", AAAI 2020.
☆32 · Updated 5 years ago
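As background on what the repo implements: cross distillation trains a compressed student layer by layer against a frozen teacher, "crossing" their intermediate features so that estimation error does not accumulate when only a few training samples are available. Below is a minimal PyTorch sketch of a cross-distillation-style layer loss; the function and argument names are illustrative assumptions, not this repository's actual API, and the paper's exact formulation may differ.

```python
# Minimal sketch of a cross-distillation-style layer loss.
# Assumptions (not this repo's API): teacher_layer is frozen
# (requires_grad=False on its parameters); mu is the convex weight
# between the two terms; the paper's exact formulation may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


def cross_distillation_loss(
    h_t: torch.Tensor,         # teacher feature map from the previous layer
    h_s: torch.Tensor,         # student feature map from the previous layer
    teacher_layer: nn.Module,  # frozen teacher layer
    student_layer: nn.Module,  # student layer being trained
    mu: float = 0.6,
) -> torch.Tensor:
    with torch.no_grad():
        target = teacher_layer(h_t)  # the teacher's own next-layer output
    # "Correction": pass the student's features through the teacher's layer,
    # pulling the student's accumulated error back toward the teacher's output.
    correction = F.mse_loss(teacher_layer(h_s), target)
    # "Imitation": pass the teacher's features through the student's layer,
    # so the student mimics the teacher on clean (teacher-side) inputs.
    imitation = F.mse_loss(student_layer(h_t), target)
    return mu * correction + (1.0 - mu) * imitation
```

A few-shot compression loop would apply a loss like this block by block, updating only the current student layer on the handful of available samples; see the repository for the authors' actual procedure.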
Alternatives and similar repositories for Cross-Distillation
Users interested in Cross-Distillation are comparing it to the libraries listed below.
- ☆31 · Updated 5 years ago
- PyTorch implementation of our paper accepted by TPAMI 2023, "Lottery Jackpots Exist in Pre-trained Models" ☆34 · Updated 2 years ago
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 ☆21 · Updated 4 years ago
- [NeurIPS 2021] “Stronger NAS with Weaker Predictors”, Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… ☆27 · Updated 2 years ago
- [CVPR 2021] Contrastive Neural Architecture Search with Neural Architecture Comparators ☆41 · Updated 3 years ago
- ☆22 · Updated 5 years ago
- The implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation" ☆32 · Updated 11 months ago
- ☆48 · Updated 5 years ago
- Code for our ICLR 2021 paper "DrNAS: Dirichlet Neural Architecture Search" ☆43 · Updated 4 years ago
- NAS benchmark in "Prioritized Architecture Sampling with Monto-Carlo Tree Search", CVPR 2021 ☆37 · Updated 3 years ago
- PyTorch implementation of our paper accepted by IEEE TNNLS 2022, "Distilling a Powerful Student Model via Online Knowledge Distillation" ☆28 · Updated 3 years ago
- Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019) ☆71 · Updated 5 years ago
- A PyTorch implementation of Feature Boosting and Suppression ☆18 · Updated 4 years ago
- [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: … ☆50 · Updated 3 years ago
- ☆10 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- Code for the AAAI 2019 paper: Deep Neural Network Quantization via Layer-Wise Optimization using Limited Training Data ☆41 · Updated 6 years ago
- Breaking the Curse of Space Explosion: Towards Efficient NAS with Curriculum Search ☆16 · Updated 11 months ago
- Data-Free Network Quantization With Adversarial Knowledge Distillation (PyTorch) ☆30 · Updated 3 years ago
- Official code of "NAS acceleration via proxy data", IJCAI 2021 ☆10 · Updated 3 years ago
- PyTorch implementation for GAL ☆56 · Updated 5 years ago
- Code for NASViT ☆67 · Updated 3 years ago
- Global Sparse Momentum SGD for pruning very deep neural networks ☆44 · Updated 2 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆50 · Updated 3 years ago
- ☆57 · Updated 4 years ago
- Role-Wise Data Augmentation for Knowledge Distillation ☆19 · Updated 2 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆21 · Updated 4 years ago
- ☆20 · Updated 2 years ago
- Paper collection about model compression and acceleration: Pruning, Quantization, Knowledge Distillation, Low Rank Factorization, etc. ☆25 · Updated 4 years ago
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… ☆44 · Updated 3 years ago