This repository introduces research topics on protecting the intellectual property (IP) of AI from a data-centric perspective. Topics include data-centric model IP protection, data authorization protection, data copyright protection, and other data-level technologies that protect the IP of AI.
☆23 · Oct 30, 2023 · Updated 2 years ago
Alternatives and similar repositories for Data_Centric_AI_IP_Protection
Users that are interested in Data_Centric_AI_IP_Protection are comparing it to the libraries listed below
- ☆14 · Feb 26, 2025 · Updated last year
- ☆22 · Apr 23, 2024 · Updated last year
- ☆27 · Nov 9, 2022 · Updated 3 years ago
- Federated Learning with New Knowledge -- explore incorporating various new knowledge into existing FL systems and evolve these systems t… ☆86 · Feb 7, 2024 · Updated 2 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Aug 24, 2022 · Updated 3 years ago
- Official Implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆28 · Mar 24, 2025 · Updated 11 months ago
- Official Code Implementation for the CCS 2022 Paper "On the Privacy Risks of Cell-Based NAS Architectures" ☆11 · Nov 21, 2022 · Updated 3 years ago
- ☆14 · Mar 9, 2025 · Updated 11 months ago
- ☆10 · Oct 31, 2022 · Updated 3 years ago
- ☆47 · Mar 29, 2022 · Updated 3 years ago
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Sep 23, 2021 · Updated 4 years ago
- Benchmark for federated noisy label learning ☆25 · Aug 31, 2024 · Updated last year
- [NDSS'25] The official implementation of safety misalignment. ☆17 · Jan 8, 2025 · Updated last year
- Official Implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆15 · Jan 13, 2023 · Updated 3 years ago
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆15 · May 18, 2024 · Updated last year
- ☆20 · Oct 28, 2025 · Updated 4 months ago
- Official code for the ACM CIKM '24 full paper "Tackling Noisy Clients in Federated Learning with End-to-end Label Correction" ☆21 · Feb 21, 2025 · Updated last year
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Dec 16, 2022 · Updated 3 years ago
- [CVPR 2024] "Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers" ☆16 · Oct 24, 2024 · Updated last year
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Jan 27, 2024 · Updated 2 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Jan 3, 2023 · Updated 3 years ago
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆17 · Updated this week
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" ☆16 · Jun 3, 2024 · Updated last year
- ☆33 · Dec 9, 2021 · Updated 4 years ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Jun 29, 2025 · Updated 8 months ago
- This repo is for the safety topic, including attacks, defenses, and studies related to reasoning and RL ☆61 · Sep 5, 2025 · Updated 5 months ago
- ☆48 · Sep 29, 2024 · Updated last year
- Code for the safety test in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates" ☆22 · Sep 21, 2025 · Updated 5 months ago
- ☆23 · Sep 15, 2024 · Updated last year
- Federated Block Coordinate Descent (FedBCD) code for "Federated Block Coordinate Descent Scheme for Learning Global and Personalized Mode… ☆16 · Dec 27, 2020 · Updated 5 years ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆82 · Updated this week
- [CVPR 2023] Federated Incremental Semantic Segmentation ☆40 · Nov 5, 2023 · Updated 2 years ago
- Code implementation for "Traceback of Data Poisoning Attacks in Neural Networks" ☆20 · Aug 15, 2022 · Updated 3 years ago
- ☆20 · Feb 11, 2024 · Updated 2 years ago
- ☆18 · Nov 13, 2021 · Updated 4 years ago
- Code for the paper "Spinning Language Models: Risks of Propaganda-as-a-Service and Countermeasures" ☆21 · Jun 6, 2022 · Updated 3 years ago
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆47 · Feb 28, 2023 · Updated 3 years ago
- ☆20 · Jul 22, 2024 · Updated last year
- A survey on harmful fine-tuning attacks for large language models ☆232 · Updated this week