conditionWang / Data_Centric_AI_IP_Protection
This repository collects research topics on protecting the intellectual property (IP) of AI from a data-centric perspective. Topics include data-centric model IP protection, data authorization protection, data copyright protection, and other data-level technologies that protect the IP of AI.
☆22 · Updated last year
Related projects
Alternatives and complementary repositories for Data_Centric_AI_IP_Protection
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆28 · Updated 10 months ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆44 · Updated 7 months ago
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness ☆29 · Updated last year
- Official Repository for ResSFL (accepted by CVPR '22) ☆20 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆29 · Updated last year
- This is the official implementation of our paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection' ☆51 · Updated 7 months ago
- Camouflage poisoning via machine unlearning ☆15 · Updated last year
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆31 · Updated 3 weeks ago
- ☆28 · Updated 2 years ago
- The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency" ☆19 · Updated last year
- Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆55 · Updated last year
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated last year
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆14 · Updated last year
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated last year
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆31 · Updated 2 years ago
- This is the code of ICLR 2022 Oral paper 'Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization' ☆30 · Updated last year
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆16 · Updated 9 months ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients ☆10 · Updated last year
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- ☆19 · Updated 3 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆26 · Updated 2 months ago
- ICCV 2021. We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆40 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆45 · Updated 2 years ago
- ☆16 · Updated 6 months ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆15 · Updated 2 weeks ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆30 · Updated 2 months ago
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆17 · Updated 7 months ago
- Official Implementation of NeurIPS 2022 paper Pre-activation Distributions Expose Backdoor Neurons ☆14 · Updated last year
- ☆10 · Updated 5 months ago