yxoh / prompt_leak_usenix2024
☆12 · Updated last year
Alternatives and similar repositories for prompt_leak_usenix2024
Users interested in prompt_leak_usenix2024 are comparing it to the repositories listed below.
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆15 · Updated 7 months ago
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 3 months ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆37 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 6 months ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆36 · Updated 9 months ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆23 · Updated last year
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- ☆19 · Updated 8 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆17 · Updated 9 months ago
- ☆23 · Updated 2 years ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated 2 years ago
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… ☆17 · Updated last year
- Camouflage poisoning via machine unlearning ☆17 · Updated 2 years ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024. ☆32 · Updated 9 months ago
- Code for the NDSS '25 paper "Passive Inference Attacks on Split Learning via Adversarial Regularization" ☆10 · Updated 8 months ago
- ☆31 · Updated 3 years ago
- Verifying machine unlearning by backdooring ☆20 · Updated 2 years ago
- Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" ☆14 · Updated last year
- Code for "Backdoor Attacks Against Dataset Distillation" ☆35 · Updated 2 years ago
- ☆29 · Updated 11 months ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated last year
- ☆20 · Updated last year
- Official repository for "Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study" (ICCV2023… ☆22 · Updated last year
- ☆1 · Updated last year
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks". ☆15 · Updated 3 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Updated 2 years ago
- ☆27 · Updated last year