This technique modifies image data so that any model trained on it will bear an identifiable mark.
☆44 · Updated Aug 13, 2021
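A minimal toy sketch of the idea in pixel space (the actual radioactive-data method applies the mark in the feature space of a pretrained network and detects it statistically in a trained model's classifier weights; the `make_carrier` helper, the `epsilon` budget, and the correlation-based check below are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def make_carrier(shape, seed=0):
    # Hypothetical helper: a fixed pseudo-random unit-norm direction
    # (the "carrier") that the data owner keeps secret.
    u = np.random.default_rng(seed).standard_normal(shape)
    return u / np.linalg.norm(u)

def mark_images(images, carrier, epsilon=0.01):
    # Shift every image slightly along the carrier direction; epsilon is an
    # assumed budget, chosen small so the change is visually imperceptible.
    return images + epsilon * carrier

def mean_alignment(images, carrier):
    # Average correlation of the data with the carrier; marked data shows a
    # positive shift that unmarked data lacks.
    return float(np.mean(images.reshape(len(images), -1) @ carrier.ravel()))

rng = np.random.default_rng(1)
clean = rng.standard_normal((200, 32, 32))    # stand-in dataset
carrier = make_carrier((32, 32), seed=42)
marked = mark_images(clean, carrier)

# Because the carrier has unit norm, marking raises the mean alignment
# by exactly epsilon (up to float rounding).
shift = mean_alignment(marked, carrier) - mean_alignment(clean, carrier)
print(shift)  # ≈ 0.01
```

In the paper, the analogous detection step is a hypothesis test on the trained model itself, which is what makes the mark survive training rather than living only in the pixels.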
Alternatives and similar repositories for radioactive_data
Users that are interested in radioactive_data are comparing it to the libraries listed below
- Adapting the "Radioactive Data" paper to work for text models ☆12 · Updated Dec 23, 2020
- This is the official implementation of our paper 'Black-box Dataset Ownership Verification via Backdoor Watermarking'. ☆26 · Updated Jul 22, 2023
- This is the implementation of our paper 'Open-sourced Dataset Protection via Backdoor Watermarking', accepted by the NeurIPS Workshop on … ☆23 · Updated Oct 13, 2021
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆13 · Updated Aug 22, 2022
- This is the official implementation of our paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection'. ☆58 · Updated Mar 20, 2024
- Code for "Label-Consistent Backdoor Attacks" ☆57 · Updated Nov 22, 2020
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆29 · Updated Feb 8, 2021
- Public implementation of ICML'19 paper "White-box vs Black-box: Bayes Optimal Strategies for Membership Inference" ☆18 · Updated May 28, 2020
- Data-Efficient Backdoor Attacks ☆20 · Updated Jun 15, 2022
- ☆32 · Updated Sep 2, 2024
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Updated Feb 19, 2022
- Code for Auditing DPSGD ☆37 · Updated Feb 15, 2022
- Official Implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆28 · Updated Mar 24, 2025
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Updated Sep 6, 2023
- This work corroborates a run-time Trojan detection method exploiting STRong Intentional Perturbation of inputs; it is a multi-domain Trojan … ☆10 · Updated Mar 7, 2021
- Codes for the ICLR 2022 paper: Trigger Hunting with a Topological Prior for Trojan Detection ☆11 · Updated Sep 19, 2023
- Code for Auditing Data Provenance in Text-Generation Models (in KDD 2019) ☆10 · Updated Jun 18, 2019
- [ICLR 2025] REFINE: Inversion-Free Backdoor Defense via Model Reprogramming ☆12 · Updated Feb 13, 2025
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Updated Sep 23, 2021
- How should we evaluate supervised hashing ☆28 · Updated Oct 11, 2018
- ☆45 · Updated Nov 10, 2019
- [CVPR2025] We present SleeperMark, a novel framework designed to embed resilient watermarks into T2I diffusion models ☆37 · Updated May 26, 2025
- ☆15 · Updated Apr 7, 2023
- This is for the ACM MM paper "Backdoor Attack on Crowd Counting" ☆17 · Updated Jul 10, 2022
- ☆12 · Updated Dec 9, 2020
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated Oct 10, 2022
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness ☆35 · Updated Jan 9, 2023
- Bullseye Polytope Clean-Label Poisoning Attack ☆15 · Updated Nov 5, 2020
- Code for the paper "Watermarking Makes Language Models Radioactive" ☆21 · Updated Oct 25, 2024
- Code for our S&P'21 paper: Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding ☆53 · Updated Nov 15, 2022
- This repository is the implementation of Deep Dirichlet Process Mixture Models (UAI 2022) ☆15 · Updated May 19, 2022
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated Dec 16, 2022
- Code for the paper: Label-Only Membership Inference Attacks ☆68 · Updated Sep 11, 2021
- ☆21 · Updated Sep 16, 2024
- ☆16 · Updated Dec 3, 2021
- ☆12 · Updated Jun 8, 2021
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16 · Updated Oct 24, 2024
- Source code for "Neural Anisotropy Directions" ☆16 · Updated Nov 17, 2020
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" NeurIPS'19 ☆33 · Updated May 18, 2021