da2so / Zero-shot_Knowledge_Distillation_Pytorch
Zero-Shot Knowledge Distillation (ZSKD) with PyTorch
☆30 · Updated last year
Alternatives and similar repositories for Zero-shot_Knowledge_Distillation_Pytorch:
Users interested in Zero-shot_Knowledge_Distillation_Pytorch are comparing it to the repositories listed below:
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- Data-Free Network Quantization With Adversarial Knowledge Distillation (PyTorch) ☆29 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 2 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆41 · Updated 4 years ago
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆24 · Updated 3 years ago
- Tiny ImageNet Visual Recognition Challenge ☆36 · Updated 6 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆108 · Updated last year
- Code for the CVPR 2021 paper "MOOD: Multi-level Out-of-distribution Detection" ☆38 · Updated last year
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- A PyTorch implementation of contrastive learning (CL) baselines. ☆13 · Updated 2 years ago
- Code for Active Mixup (CVPR 2020) ☆22 · Updated 3 years ago
- Code and checkpoints of compressed networks for the paper titled "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆90 · Updated 2 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?" (NeurIPS 2021). ☆33 · Updated last year
- [AAAI] Official code repository for "Continual Learning with Scaled Gradient Projection" ☆11 · Updated last year
- Knowledge Extraction with No Observable Data (NeurIPS 2019) ☆44 · Updated 5 years ago
- [ICLR 2022] "Sparsity Winning Twice: Better Robust Generalization from More Efficient Training" by Tianlong Chen*, Zhenyu Zhang*, Pengjun… ☆39 · Updated 2 years ago
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… ☆43 · Updated 3 years ago
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data. ☆21 · Updated 2 years ago
- ☆22 · Updated 4 years ago
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models" Tianlong Chen, Jon… ☆68 · Updated 2 years ago
- ☆22 · Updated 5 years ago
- Code to reproduce the experiments of "Rethinking Experience Replay: a Bag of Tricks for Continual Learning" ☆45 · Updated last year
- ☆26 · Updated 3 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆54 · Updated 2 years ago
- Official codebase for the paper "Joslim: Joint Widths and Weights Optimization for Slimmable Neural Networks" ☆12 · Updated 3 years ago
- [IJCAI 2021] "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation" ☆40 · Updated last year
- Official implementation of "Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay" ☆18 · Updated 2 years ago
- Model Zoos for Continual Learning (ICLR 2022) ☆43 · Updated last year
- Official repo for "Firefly Neural Architecture Descent: A General Approach for Growing Neural Networks" (NeurIPS 2020). ☆31 · Updated 4 years ago