kuluhan/PRE-DFKD
Official implementation of the work titled "Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay"
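To give a feel for the setting the paper addresses, below is a minimal, hypothetical sketch of generic data-free knowledge distillation: the student is trained to match a teacher's soft predictions on pseudo samples (here plain Gaussian noise standing in for a generator), with no access to real training data. This is an illustrative toy with linear "networks", not the paper's generative pseudo-replay method; all names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical fixed "teacher": a linear classifier standing in for a trained network.
D, C = 8, 4
W_teacher = rng.normal(size=(D, C))
W_student = np.zeros((D, C))

lr = 0.5
for step in range(300):
    # Pseudo samples: noise in place of real data (a generator would go here).
    x = rng.normal(size=(64, D))
    p_t = softmax(x @ W_teacher)   # soft teacher targets
    p_s = softmax(x @ W_student)   # student predictions
    # For soft-label cross-entropy, the gradient w.r.t. student logits is (p_s - p_t).
    W_student -= lr * x.T @ (p_s - p_t) / len(x)

# After distillation the student should largely agree with the teacher on fresh noise.
x_test = rng.normal(size=(256, D))
agreement = np.mean(
    (x_test @ W_teacher).argmax(1) == (x_test @ W_student).argmax(1)
)
print(agreement)
```

The sketch also shows why memory of past pseudo samples matters: each batch is drawn fresh, so without some form of replay a shifting generator can make the student forget earlier regions of the input space — the failure mode the paper's generative pseudo replay targets.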
☆18 · Updated 2 years ago
Alternatives and similar repositories for PRE-DFKD:
Users interested in PRE-DFKD are comparing it to the repositories listed below.
- [ICLR 2023] Test-time Robust Personalization for Federated Learning
- [ICLR 2022] Official code repository for "TRGP: Trust Region Gradient Projection for Continual Learning"
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture", published at Neur…
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?"
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data.
- Data-Free Network Quantization with Adversarial Knowledge Distillation (PyTorch)
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML 2022)
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper)
- Code for the NeurIPS 2021 paper "Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning".
- Official implementation of "Dataset Condensation with Contrastive Signals" (DCC), accepted at ICML 2022.
- Implementation for "Understanding Robust Overfitting of Adversarial Training and Beyond" (ICML 2022).
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al…
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao…
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023)
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted at NAACL 2024 Findings)
- Implementation for "Robust Weight Perturbation for Adversarial Training" (IJCAI 2022).
- PyTorch implementation of the NeurIPS 2021 paper "Class-Incremental Learning via Dual Augmentation"
- [ICLR 2022] "Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity" by Shiwei Liu,…
- OODRobustBench: a benchmark and large-scale analysis of adversarial robustness under distribution shift (ICML 2024 and ICLRW-DMLR 2024)
- Benchmark for federated noisy-label learning
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan…
- [ICLR 2023] "Combating Exacerbated Heterogeneity for Robust Models in Federated Learning"
- Repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec…
- Code for the paper "Representational Continuity for Unsupervised Continual Learning" (ICLR 2022)