Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: //124.220.228.133:11107
☆21 · Aug 10, 2024 · Updated last year
Alternatives and similar repositories for PoisonPrompt
Users who are interested in PoisonPrompt are comparing it to the repositories listed below.
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆40 · Jul 8, 2024 · Updated last year
- Code for the paper "RemovalNet: DNN model fingerprinting removal attack", IEEE TDSC 2023 ☆10 · Nov 27, 2023 · Updated 2 years ago
- Welcome to the official repository for Siren, a project aimed at understanding and mitigating harmful behaviors in large language models … ☆15 · Sep 12, 2025 · Updated 7 months ago
- Composite Backdoor Attacks Against Large Language Models ☆25 · Apr 12, 2024 · Updated 2 years ago
- [ACL 25] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆30 · Apr 2, 2025 · Updated last year
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆297 · Mar 13, 2026 · Updated last month
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆55 · Jun 2, 2025 · Updated 10 months ago
- [EMNLP 24] Official Implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆19 · Mar 9, 2025 · Updated last year
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆13 · Sep 16, 2024 · Updated last year
- This is the code repo of our Pattern Recognition journal paper on IPR protection of Image Captioning Models ☆11 · Aug 29, 2023 · Updated 2 years ago
- [ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs ☆39 · Feb 25, 2025 · Updated last year
- ☆26 · Jun 27, 2024 · Updated last year
- Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-… ☆45 · Jul 26, 2021 · Updated 4 years ago
- ☆29 · Aug 21, 2023 · Updated 2 years ago
- ☆37 · Oct 17, 2024 · Updated last year
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- [SIGIR'25] Code of "Generative Recommender with End-to-End Learnable Item Tokenization" ☆27 · Apr 17, 2025 · Updated last year
- Implementation of Self-supervised-Online-Adversarial-Purification ☆13 · Aug 2, 2021 · Updated 4 years ago
- Code and data of the EMNLP 2021 paper "Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer" ☆46 · Oct 12, 2022 · Updated 3 years ago
- Code of the paper "xJailbreak: Representation Space Guided Reinforcement Learning for Interpretable LLM Jailbreaking" ☆18 · Apr 3, 2026 · Updated 3 weeks ago
- Code for the paper "Constrained Convolutional Neural Networks: A New Approach Towards General Purpose Image Manipulation Detection" ☆43 · Sep 26, 2020 · Updated 5 years ago
- The official PyTorch implementation of the ACM MM 19 paper "MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks" ☆11 · Jun 7, 2021 · Updated 4 years ago
- ICL backdoor attack ☆17 · Nov 4, 2024 · Updated last year
- VectorDefense: Vectorization as a Defense to Adversarial Examples ☆13 · May 3, 2018 · Updated 7 years ago
- rosploit tools ☆16 · Apr 9, 2021 · Updated 5 years ago
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆176 · Feb 20, 2024 · Updated 2 years ago
- Learn Haskell ☆13 · Jun 10, 2022 · Updated 3 years ago
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining ☆19 · Feb 26, 2025 · Updated last year
- ☆27 · Jun 5, 2024 · Updated last year
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆49 · Aug 28, 2021 · Updated 4 years ago
- This is the code implementation for the paper "Data Poisoning Attacks to Deep Learning Based Recommender Systems" ☆17 · Sep 8, 2022 · Updated 3 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Feb 27, 2020 · Updated 6 years ago
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆16 · Aug 7, 2025 · Updated 8 months ago
- Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing ☆14 · Feb 18, 2021 · Updated 5 years ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆24 · Mar 23, 2024 · Updated 2 years ago
- Implementation of a Siamese Neural Network (in TensorFlow) that defines a similarity score between a pair of person images ☆12 · Sep 25, 2020 · Updated 5 years ago
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" @ ICCV 2023 ☆40 · Oct 16, 2025 · Updated 6 months ago
- The official code of the paper "Semantic-guided Multi-mask Image Harmonization" (ECCV 2022) ☆15 · Jul 20, 2022 · Updated 3 years ago
- Official implementation of the EMNLP 2021 paper "ONION: A Simple and Effective Defense Against Textual Backdoor Attacks" ☆37 · Nov 3, 2021 · Updated 4 years ago