grasses / PoisonPrompt
Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo: 124.220.228.133:11107
☆17 · Updated 10 months ago
Alternatives and similar repositories for PoisonPrompt
Users interested in PoisonPrompt are comparing it to the libraries listed below.
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆38 · Updated last year
- Code for paper: "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024. ☆32 · Updated 10 months ago
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆31 · Updated last month
- This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… ☆17 · Updated 2 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 7 months ago
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Updated 7 months ago
- ☆22 · Updated 9 months ago
- ☆28 · Updated 8 months ago
- ☆21 · Updated last year
- ☆46 · Updated last year
- ☆20 · Updated 6 months ago
- Code for NeurIPS 2024 Paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆14 · Updated last month
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆36 · Updated 11 months ago
- ☆31 · Updated 3 years ago
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆20 · Updated last year
- ☆48 · Updated 10 months ago
- Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆58 · Updated 2 years ago
- ☆25 · Updated last month
- ☆18 · Updated 2 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 · Updated last year
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆19 · Updated 2 years ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆23 · Updated 9 months ago
- SaTML'23 paper "Backdoor Attacks on Time Series: A Generative Approach" by Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, and James Bail… ☆18 · Updated 2 years ago
- [ICLR24] Official Repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆37 · Updated 11 months ago
- ☆16 · Updated 2 years ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆35 · Updated last year
- ☆16 · Updated 3 months ago
- Code for "Adversarial Illusions in Multi-Modal Embeddings" ☆24 · Updated 10 months ago
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆22 · Updated last year
- ☆35 · Updated 11 months ago