Piyush-555 / GaussianDistillation
Data-free knowledge distillation using Gaussian noise (NeurIPS paper)
☆15 · Updated 2 years ago
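As context for the listing below, the idea named in the description, distilling a teacher into a student using Gaussian noise as the transfer set instead of real data, can be sketched in a few lines of PyTorch. This is a minimal illustration of the general technique only, not this repository's code; the batch size, input shape, temperature, and loss scaling below are illustrative assumptions.

```python
# Minimal sketch of data-free KD with Gaussian-noise inputs.
# Illustrative assumptions throughout; NOT the repository's implementation.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer,
                 batch_size=128, input_shape=(3, 32, 32),
                 temperature=4.0, device="cpu"):
    """One distillation step on a batch of standard-Gaussian 'images'."""
    x = torch.randn(batch_size, *input_shape, device=device)
    with torch.no_grad():
        t_logits = teacher(x)  # soft targets from the frozen teacher
    s_logits = student(x)
    # Hinton-style KD loss: KL divergence between temperature-softened
    # distributions, rescaled by T^2 to keep gradient magnitudes stable.
    loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice such a step would be run for many iterations over fresh noise batches, with only the student's parameters passed to the optimizer.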
Alternatives and similar repositories for GaussianDistillation
Users interested in GaussianDistillation are comparing it to the repositories listed below.
- PyTorch implementation of Contrastive Learning (CL) baselines. ☆14 · Updated 2 years ago
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆23 · Updated last year
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 3 years ago
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆25 · Updated 3 years ago
- Official Implementation for PlugIn Inversion ☆16 · Updated 3 years ago
- Self-Distillation with weighted ground-truth targets; ResNet and Kernel Ridge Regression ☆18 · Updated 3 years ago
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆16 · Updated 2 years ago
- Official PyTorch implementation of “Flexible Dataset Distillation: Learn Labels Instead of Images” ☆42 · Updated 4 years ago
- Zero-Shot Knowledge Distillation (ZSKD) with PyTorch ☆31 · Updated 2 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Updated 4 years ago
- [ICML 2021] "Efficient Lottery Ticket Finding: Less Data is More" by Zhenyu Zhang*, Xuxi Chen*, Tianlong Chen*, Zhangyang Wang ☆25 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models" Tianlong Chen, Jon… ☆68 · Updated 2 years ago
- PyTorch implementation of Data-Free Network Quantization with Adversarial Knowledge Distillation ☆30 · Updated 3 years ago
- PyTorch implementation of Adversarially Robust Distillation (ARD) ☆59 · Updated 6 years ago
- Certified Patch Robustness via Smoothed Vision Transformers ☆42 · Updated 3 years ago
- Experiments from "The Generalization-Stability Tradeoff in Neural Network Pruning": https://arxiv.org/abs/1906.03728 ☆14 · Updated 4 years ago
- [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… ☆125 · Updated 3 years ago
- Code for CVPR 2021 paper: MOOD: Multi-level Out-of-distribution Detection ☆38 · Updated last year
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆113 · Updated last year
- [NeurIPS 2021] “When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?” ☆48 · Updated 3 years ago
- Code for Active Mixup (CVPR 2020) ☆23 · Updated 3 years ago
- Code for ECCV 2022 paper "DICE: Leveraging Sparsification for Out-of-Distribution Detection" ☆40 · Updated 2 years ago
- An Efficient Dataset Condensation Plugin and Its Application to Continual Learning (NeurIPS 2023) ☆11 · Updated last year
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes ☆23 · Updated 5 years ago
- ☆13 · Updated 3 years ago
- Official repository for "On Improving Adversarial Transferability of Vision Transformers" (ICLR 2022 Spotlight) ☆72 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training" ☆34 · Updated 5 years ago