da2so / Zero-shot_Knowledge_Distillation_Pytorch
Zero-Shot Knowledge Distillation (ZSKD) with PyTorch
☆30 · Updated last year
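ZSKD (Nayak et al., ICML 2019) distills without any training data: it synthesizes "Data Impressions" by sampling soft target vectors from a Dirichlet distribution and optimizing inputs until the teacher reproduces them, then trains the student on those synthetic inputs. Below is a minimal sketch of the target-sampling step; the function name and the uniform concentration are illustrative assumptions (the paper scales the concentration by class similarities taken from the teacher's final layer), not this repo's API.

```python
import torch

def sample_data_impression_targets(num_classes: int, batch_size: int, beta: float = 1.0) -> torch.Tensor:
    """Sample soft target vectors for Data Impressions from a Dirichlet prior.

    Assumption: a uniform concentration `beta` over all classes; ZSKD itself
    mixes concentration scales and weights them by teacher class similarity.
    """
    concentration = torch.full((num_classes,), beta)
    # Each row is a probability vector over classes that a synthetic input
    # will later be optimized to elicit from the teacher.
    return torch.distributions.Dirichlet(concentration).sample((batch_size,))
```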
Alternatives and similar repositories for Zero-shot_Knowledge_Distillation_Pytorch:
Users interested in Zero-shot_Knowledge_Distillation_Pytorch are comparing it to the libraries listed below.
- Data-Free Network Quantization With Adversarial Knowledge Distillation, PyTorch ☆29 · Updated 3 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆42 · Updated 4 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- A generic code base for neural network pruning, especially for pruning at initialization (a minimal pruning sketch appears after this list). ☆30 · Updated 2 years ago
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models", Tianlong Chen, Jon… ☆69 · Updated 2 years ago
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆25 · Updated 3 years ago
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data. ☆21 · Updated 2 years ago
- Code for the CVPR 2021 paper "MOOD: Multi-level Out-of-distribution Detection" ☆38 · Updated last year
- ☆22 · Updated 5 years ago
- ☆10 · Updated 4 years ago
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆33 · Updated 2 years ago
- [ICLR 2022] "The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training" by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- [ICLR 2022 Spotlight] Continual Learning With Filter Atom Swapping ☆16 · Updated last year
- ☆21 · Updated 4 years ago
- Code for the paper "Few Shot Network Compression via Cross Distillation", AAAI 2020 ☆31 · Updated 5 years ago
- A PyTorch implementation of "Data-Free Learning of Student Networks" (ICCV 2019) ☆17 · Updated 5 years ago
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆91 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- IJCAI 2021, "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation" (both losses are sketched after this list) ☆40 · Updated 2 years ago
- ☆26 · Updated 4 years ago
- [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: … ☆49 · Updated 3 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?", a paper from the NeurIPS 2021 proceedings ☆33 · Updated last year
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- PyTorch implementation of "Feature Generation for Long-Tail Classification" by Rahul Vigneswaran, Marc T Law, Vineeth N Balasubramaniam and… ☆38 · Updated 2 years ago
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆36 · Updated 8 months ago
- Official [AAAI] Code Repository for "Continual Learning with Scaled Gradient Projection" ☆12 · Updated last year
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" ☆96 · Updated 2 years ago
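The IJCAI 2021 entry above compares the two standard distillation objectives, which are short enough to state side by side. This is a hedged sketch of the textbook formulations; the function names and the temperature default are assumptions, not that repo's API.

```python
import torch
import torch.nn.functional as F

def kd_kl(student_logits: torch.Tensor, teacher_logits: torch.Tensor, T: float = 4.0) -> torch.Tensor:
    """Hinton-style KD: KL divergence between temperature-softened distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def kd_mse(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """Logit matching: plain MSE on raw logits, with no temperature to tune."""
    return F.mse_loss(student_logits, teacher_logits)
```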
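Several entries above concern pruning (the generic pruning code base, Continuous Sparsification, random pruning). As a baseline reference point only, and not any listed repo's method, here is global magnitude pruning using PyTorch's built-in torch.nn.utils.prune utilities:

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18

model = resnet18()

# Collect (module, parameter name) pairs for every conv layer.
to_prune = [(m, "weight") for m in model.modules() if isinstance(m, torch.nn.Conv2d)]

# Zero out the 80% of conv weights with the smallest L1 magnitude,
# ranked globally across layers rather than per layer.
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.8)

# Fold the binary masks into the weights so the sparsity is permanent.
for module, name in to_prune:
    prune.remove(module, name)
```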