conditionWang / NTL
Official code for the ICLR 2022 Oral paper "Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization".
☆30 · Updated 2 years ago
Alternatives and similar repositories for NTL
Users interested in NTL are comparing it to the repositories listed below.
- This is the official code for "Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better" ☆46 · Updated 4 years ago
- [NeurIPS 2021] “When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?” ☆48 · Updated 4 years ago
- [ICLR2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 2 months ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 3 years ago
- Code for CVPR22 paper "Deep Unlearning via Randomized Conditionally Independent Hessians" ☆25 · Updated 3 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated 2 years ago
- Data-Efficient Backdoor Attacks ☆20 · Updated 3 years ago
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness ☆34 · Updated 2 years ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆33 · Updated 2 years ago
- Robustify Black-Box Models (ICLR'22 - Spotlight) ☆24 · Updated 2 years ago
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- [NeurIPS 2021] "Class-Disentanglement and Applications in Adversarial Detection and Defense" ☆46 · Updated 3 years ago
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data. ☆22 · Updated 3 years ago
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… ☆27 · Updated 3 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- ☆89 · Updated 2 years ago
- Implementation for "Robust Weight Perturbation for Adversarial Training" (IJCAI'22) ☆16 · Updated 3 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated last year
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated 3 years ago
- PyTorch implementation of Adversarially Robust Distillation (ARD) ☆59 · Updated 6 years ago
- Camouflage poisoning via machine unlearning ☆18 · Updated 5 months ago
- ☆21 · Updated 3 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated last year
- ☆21 · Updated last year
- Removing Adversarial Noise in Class Activation Feature Space ☆14 · Updated 2 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- ☆58 · Updated 3 years ago