Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: //124.220.228.133:11107
☆20, updated Aug 10, 2024
Alternatives and similar repositories for PoisonPrompt
Users interested in PoisonPrompt are comparing it to the repositories listed below.
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" (☆40, updated Jul 8, 2024)
- Code for the paper "RemovalNet: DNN model fingerprinting removal attack", IEEE TDSC 2023 (☆10, updated Nov 27, 2023)
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 (☆34, updated Aug 10, 2024)
- Official implementation of "Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection" (ICLR 2024) (☆18, updated Apr 15, 2024)
- [EMNLP 24] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models (☆19, updated Mar 9, 2025)
- Code repo of our Pattern Recognition journal paper on IPR protection of image captioning models (☆11, updated Aug 29, 2023)
- [NeurIPS'25] Backdoor Cleaning without External Guidance in MLLM Fine-tuning (☆17, updated Oct 13, 2025)
- ☆28, updated Aug 21, 2023
- Implementation of Self-supervised Online Adversarial Purification (☆13, updated Aug 2, 2021)
- All code and data necessary to replicate experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model…" (☆13, updated Sep 16, 2024)
- Official PyTorch implementation of the ACM MM '19 paper "MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks" (☆11, updated Jun 7, 2021)
- Official repository for Siren, a project aimed at understanding and mitigating harmful behaviors in large language models … (☆15, updated Sep 12, 2025)
- Official implementation of the CVPR 2025 paper "Invisible Backdoor Attack against Self-supervised Learning" (☆17, updated Jul 5, 2025)
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping (☆10, updated Feb 27, 2020)
- [ICLR 2025 Spotlight] Weak-to-Strong Preference Optimization: Stealing Reward from Weak Aligned Model (☆16, updated Feb 24, 2025)
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining (☆19, updated Feb 26, 2025)
- Code for the paper "Dynamic Backdoor Attacks Against Machine Learning Models" (☆16, updated Nov 20, 2023)
- ☆18, updated Oct 7, 2022
- [NeurIPS 2023] Black-box Backdoor Defense via Zero-shot Image Purification (☆16, updated Oct 31, 2023)
- Composite Backdoor Attacks Against Large Language Models (☆22, updated Apr 12, 2024)
- ☆19, updated Jun 27, 2021
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" (☆15, updated Aug 7, 2025)
- [ACL 25] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities (☆28, updated Apr 2, 2025)
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access (☆52, updated Jun 2, 2025)
- Code for "Adversarial Attack Generation Empowered by Min-Max Optimization", NeurIPS 2021 (☆19, updated Dec 6, 2021)
- Code for the paper "Constrained Convolutional Neural Networks: A New Approach Towards General Purpose Image Manipulation Detection" (☆43, updated Sep 26, 2020)
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models (☆276, updated Feb 2, 2026)
- Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-…) (☆44, updated Jul 26, 2021)
- ☆21, updated Jul 22, 2024
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) (☆23, updated Mar 23, 2024)
- PyTorch deep learning object detection using the CINIC-10 dataset (☆22, updated Feb 26, 2020)
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" (☆21, updated Oct 25, 2019)
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" (☆27, updated Sep 18, 2025)
- ☆59, updated Jun 5, 2024
- Code release for DeepJudge (S&P '22) (☆52, updated Mar 14, 2023)
- Universal Adversarial Perturbations (UAPs) for PyTorch (☆49, updated Aug 28, 2021)
- Official implementation of the CVPR 2023 paper "Backdoor Defense via Deconfounded Representation Learning" (☆25, updated Mar 13, 2023)
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning (☆20, updated Jan 24, 2024)
- ☆26, updated Jun 27, 2024