THUYimingLi / Open-sourced_Dataset_Protection
This is the implementation of our paper 'Open-sourced Dataset Protection via Backdoor Watermarking', accepted by the NeurIPS Workshop on Dataset Curation and Security, 2020.
☆19
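At a high level, backdoor watermarking protects a released dataset by stamping a trigger pattern onto a small fraction of samples and relabeling them to a target class; a model trained on the data then maps trigger-stamped inputs to that class far more often than chance, which serves as statistical evidence of ownership. A minimal illustrative sketch (the patch trigger, poisoning rate, and `watermark_dataset` helper below are hypothetical conveniences, not the paper's exact recipe):

```python
import numpy as np

def watermark_dataset(images, labels, target_label, rate=0.1, patch_size=3, seed=0):
    """Embed a simple backdoor watermark into an image dataset.

    Hypothetical sketch: stamps a white square in the bottom-right corner
    of a random subset of images and relabels them to `target_label`.
    images: uint8 array of shape (N, H, W, C); labels: int array of shape (N,).
    Returns watermarked copies plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(rate * n), replace=False)
    # Stamp the trigger patch on each selected image.
    images[idx, -patch_size:, -patch_size:, :] = 255
    labels[idx] = target_label
    return images, labels, idx
```

Ownership verification would then check whether a suspect model's predictions on trigger-stamped inputs concentrate on `target_label` significantly more than a clean model's would.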
Related projects
Alternatives and complementary repositories for Open-sourced_Dataset_Protection
- Defending against Model Stealing via Verifying Embedded External Features (☆32)
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning (☆31)
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) (☆33)
- This is the official implementation of our paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection' (☆51)
- Code for "Label-Consistent Backdoor Attacks" (☆49)
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" (☆28)
- Simple yet effective targeted transferable attack (NeurIPS 2021) (☆47)
- Code for the CVPR 2020 paper "QEBA: Query-Efficient Boundary-Based Blackbox Attack" (☆30)
- RAB: Provable Robustness Against Backdoor Attacks (☆39)
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" (☆50)
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks (☆27)
- [ICCV 2021] We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This rep… (☆40)
- Decision-based Adversarial Attack with Frequency Mixup (☆21)
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an… (☆12)
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" (☆12)
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) (☆14)
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (☆17)
- LiangSiyuan21 / Parallel-Rectangle-Flip-Attack-A-Query-based-Black-box-Attack-against-Object-Detection: An implementation of the ICCV 2021 paper "Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection" (☆28)