yxoh / prompt_leak_usenix2024
☆12 · Updated last year
Alternatives and similar repositories for prompt_leak_usenix2024
Users interested in prompt_leak_usenix2024 are comparing it to the repositories listed below.
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆19 · Updated 2 years ago
- Camouflage poisoning via machine unlearning ☆17 · Updated last week
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆37 · Updated 9 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 7 months ago
- ☆24 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆38 · Updated last year
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆15 · Updated 8 months ago
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆23 · Updated last year
- Verifying machine unlearning by backdooring ☆20 · Updated 2 years ago
- ☆19 · Updated 9 months ago
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… ☆17 · Updated 2 years ago
- Code for "Adversarial Illusions in Multi-Modal Embeddings" ☆24 · Updated 10 months ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo//124.220.228.133:11107 ☆17 · Updated 10 months ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 4 months ago
- Code for "Backdoor Attacks Against Dataset Distillation" ☆35 · Updated 2 years ago
- ☆20 · Updated last year
- ☆23 · Updated 2 years ago
- ☆31 · Updated 3 years ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆58 · Updated 2 years ago
- PyTorch implementation of backdoor unlearning. ☆19 · Updated 3 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆65 · Updated 3 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- ☆21 · Updated last year
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆48 · Updated 3 years ago
- Query-Efficient Data-Free Learning from Black-Box Models ☆22 · Updated 2 years ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated 2 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆16 · Updated last year
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 2 years ago