PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. π Best Paper Awards @ NeurIPS ML Safety Workshop 2022
β455Feb 26, 2024Updated 2 years ago
Alternatives and similar repositories for PromptInject
Users that are interested in PromptInject are comparing it to the libraries listed below
Sorting:
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Promptsβ568Updated this week
- New ways of breaking app-integrated LLMsβ2,053Jul 17, 2025Updated 7 months ago
- Universal and Transferable Attacks on Aligned Language Modelsβ4,521Aug 2, 2024Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMsβ395Oct 29, 2025Updated 4 months ago
- A curation of awesome tools, documents and projects about LLM Security.β1,530Aug 20, 2025Updated 6 months ago
- the LLM vulnerability scannerβ7,088Updated this week
- Curation of prompts that are known to be adversarial to large language modelsβ189Feb 12, 2023Updated 3 years ago
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β344Feb 12, 2024Updated 2 years ago
- [ICLR 2024] The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Mβ¦β430Jan 22, 2025Updated last year
- β698Jul 2, 2025Updated 8 months ago
- The automated prompt injection framework for LLM-integrated applications.β254Sep 12, 2024Updated last year
- LLM Prompt Injection Detectorβ1,423Aug 7, 2024Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusalβ864Aug 16, 2024Updated last year
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ458Jan 31, 2024Updated 2 years ago
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ635Feb 16, 2026Updated last week
- Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignmentβ108Mar 8, 2024Updated last year
- Papers and resources related to the security and privacy of LLMs π€β566Jun 8, 2025Updated 8 months ago
- Dropbox LLM Security research code and resultsβ255May 21, 2024Updated last year
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engβ¦β3,468Updated this week
- autoredteam: code for training models that automatically red team other language modelsβ15Aug 9, 2023Updated 2 years ago
- The Security Toolkit for LLM Interactionsβ2,584Dec 15, 2025Updated 2 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilitiesβ33May 27, 2024Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025]β377Jan 23, 2025Updated last year
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shapingβ10Feb 27, 2020Updated 6 years ago
- We jailbreak GPT-3.5 Turboβs safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20β¦β341Feb 23, 2024Updated 2 years ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".β69Oct 23, 2024Updated last year
- TAP: An automated jailbreaking method for black-box LLMsβ221Dec 10, 2024Updated last year
- using ML models for red teamingβ42Aug 9, 2023Updated 2 years ago
- Code to generate NeuralExecs (prompt injection for LLMs)β27Oct 5, 2025Updated 4 months ago
- Every practical and proposed defense against prompt injection.β642Feb 22, 2025Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts.β815Mar 27, 2025Updated 11 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"β1,816Jun 17, 2025Updated 8 months ago
- A framework to evaluate the generalization capability of safety alignment for LLMsβ624Oct 9, 2025Updated 4 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Modelsβ57Aug 17, 2024Updated last year
- β196Nov 26, 2023Updated 2 years ago
- Prompts Methods to find the vulnerabilities in Generative Modelsβ20Feb 23, 2023Updated 3 years ago
- Implementation of BEAST adversarial attack for language models (ICML 2024)β90May 14, 2024Updated last year
- Code for paper "Poisoned classifiers are not only backdoored, they are fundamentally broken"β26Jan 7, 2022Updated 4 years ago
- Protection against Model Serialization Attacksβ646Feb 18, 2026Updated last week