kuluhan / PRE-DFKD
Official implementation of the work titled "Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay"
☆18 · Updated 3 years ago
Alternatives and similar repositories for PRE-DFKD
Users interested in PRE-DFKD are comparing it to the repositories listed below.
- ☆86 · Updated 2 years ago
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 3 years ago
- [ICLR 2023] Test-time Robust Personalization for Federated Learning ☆54 · Updated last year
- ☆44 · Updated 9 months ago
- Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning (NeurIPS 2021) ☆8 · Updated 3 years ago
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆33 · Updated 7 months ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆30 · Updated last year
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022 ☆21 · Updated 2 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning" published at EuroS&P'22 ☆22 · Updated 3 years ago
- ☆26 · Updated 2 years ago
- Implementation for "Understanding Robust Overfitting of Adversarial Training and Beyond" (ICML'22) ☆12 · Updated 2 years ago
- ☆11 · Updated last year
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) ☆13 · Updated 2 years ago
- Data-Free Network Quantization With Adversarial Knowledge Distillation (PyTorch) ☆29 · Updated 3 years ago
- [NeurIPS'23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆67 · Updated last year
- OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift (ICML 2024 and ICLRW-DMLR 2024) ☆20 · Updated 9 months ago
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆16 · Updated last year
- Benchmark for federated noisy label learning ☆24 · Updated 8 months ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- ☆23 · Updated last year
- [ICLR 2023] "Combating Exacerbated Heterogeneity for Robust Models in Federated Learning" ☆32 · Updated last year
- ☆30 · Updated 2 years ago
- ☆16 · Updated 11 months ago
- [NeurIPS'23] FedL2P: Federated Learning to Personalize ☆21 · Updated 10 months ago
- Code for the CVPR'22 paper "Deep Unlearning via Randomized Conditionally Independent Hessians" ☆25 · Updated 2 years ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆52 · Updated 2 years ago
- ☆63 · Updated last year
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data ☆21 · Updated 3 years ago
- [ICLR 2022] "Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity" by Shiwei Liu,… ☆27 · Updated 2 years ago