da2so / Zero-shot_Knowledge_Distillation_Pytorch
Zero-Shot Knowledge Distillation (ZSKD) in PyTorch
☆31 · Updated 2 years ago
Alternatives and similar repositories for Zero-shot_Knowledge_Distillation_Pytorch
Users interested in Zero-shot_Knowledge_Distillation_Pytorch are comparing it to the repositories listed below.
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 4 years ago
- Data-Free Network Quantization With Adversarial Knowledge Distillation (PyTorch) ☆30 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆41 · Updated 5 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆31 · Updated 3 years ago
- ☆23 · Updated 6 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆116 · Updated 2 years ago
- [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… ☆125 · Updated 4 years ago
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆26 · Updated 4 years ago
- Code for Active Mixup (CVPR 2020) ☆23 · Updated 4 years ago
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models" Tianlong Chen, Jon… ☆68 · Updated 3 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆90 · Updated 2 years ago
- Knowledge Extraction with No Observable Data (NeurIPS 2019) ☆46 · Updated 6 years ago
- [IJCAI 2021] "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation" ☆42 · Updated 2 years ago
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" ☆100 · Updated 3 years ago
- Code and checkpoints of compressed networks for the paper titled "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆91 · Updated 3 years ago
- ☆23 · Updated 5 years ago
- Code for the CVPR 2021 paper "MOOD: Multi-level Out-of-distribution Detection" ☆38 · Updated 2 years ago
- Code for "Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources". (IC…☆38Updated 5 years ago
- Soft-Label Dataset Distillation and Text Dataset Distillation☆74Updated 3 years ago
- [ICLR-2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers.☆31Updated 6 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching"☆144Updated 5 years ago
- Code for our ICLR'2021 paper "DrNAS: Dirichlet Neural Architecture Search"☆43Updated 4 years ago
- Neuron Merging: Compensating for Pruned Neurons (NeurIPS 2020)☆43Updated 4 years ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T…☆33Updated 4 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation☆76Updated 3 years ago
- Official repo for "Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks" (NeurIPS 2020) ☆34 · Updated 5 years ago
- [ICLR 2022] "The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training" by Shiwei Liu, Tianlo… ☆77 · Updated 3 years ago
- [SafeAI'21] Feature Space Singularity for Out-of-Distribution Detection ☆79 · Updated 4 years ago