SonyResearch / IDEAL
Query-Efficient Data-Free Learning from Black-Box Models
☆22 · Updated 2 years ago
Alternatives and similar repositories for IDEAL
Users interested in IDEAL are comparing it to the repositories listed below.
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆35 · Updated 6 months ago
- Official repository for ResSFL (accepted by CVPR '22) ☆21 · Updated 2 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆34 · Updated 2 years ago
- Repository introducing research topics on protecting the intellectual property (IP) of AI from a data-centric perspective ☆22 · Updated last year
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆34 · Updated 8 months ago
- [ICLR '21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- Code for the ICLR 2022 oral paper "Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization" ☆30 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 5 months ago
- ☆25 · Updated 3 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 2 years ago
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data ☆21 · Updated 3 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆30 · Updated 2 years ago
- Camouflage poisoning via machine unlearning ☆17 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 8 months ago
- Official repository for the Data-Free Model Extraction paper (CVPR 2021). https://arxiv.org/abs/2011.14779 ☆71 · Updated last year
- Official implementation of "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection" ☆55 · Updated last year
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆30 · Updated 3 years ago
- [NeurIPS 2023 Spotlight] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆67 · Updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year
- ☆34 · Updated last year
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆49 · Updated 2 years ago
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers ☆21 · Updated 3 years ago
- PyTorch implementation of the BPDA+EOT attack to evaluate an adversarial defense with an EBM ☆24 · Updated 4 years ago
- ☆65 · Updated last year
- ☆19 · Updated 7 months ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆16 · Updated last year
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆41 · Updated 2 years ago
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆89 · Updated last week
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning" published at EuroS&P '22 ☆22 · Updated 3 years ago