PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. π Best Paper Awards @ NeurIPS ML Safety Workshop 2022
β465Feb 26, 2024Updated 2 years ago
Alternatives and similar repositories for PromptInject
Users that are interested in PromptInject are comparing it to the libraries listed below
Sorting:
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Promptsβ573Feb 27, 2026Updated 3 weeks ago
- New ways of breaking app-integrated LLMsβ2,063Jul 17, 2025Updated 8 months ago
- Universal and Transferable Attacks on Aligned Language Modelsβ4,568Aug 2, 2024Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMsβ409Oct 29, 2025Updated 4 months ago
- A curation of awesome tools, documents and projects about LLM Security.β1,548Aug 20, 2025Updated 7 months ago
- [ICLR 2024] The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Mβ¦β434Jan 22, 2025Updated last year
- the LLM vulnerability scannerβ7,312Updated this week
- β704Jul 2, 2025Updated 8 months ago
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β347Feb 12, 2024Updated 2 years ago
- Curation of prompts that are known to be adversarial to large language modelsβ190Feb 12, 2023Updated 3 years ago
- LLM Prompt Injection Detectorβ1,445Aug 7, 2024Updated last year
- The automated prompt injection framework for LLM-integrated applications.β258Sep 12, 2024Updated last year
- a security scanner for custom LLM applicationsβ1,149Dec 1, 2025Updated 3 months ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusalβ879Aug 16, 2024Updated last year
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ465Jan 31, 2024Updated 2 years ago
- Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignmentβ108Mar 8, 2024Updated 2 years ago
- Dropbox LLM Security research code and resultsβ256May 21, 2024Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025]β380Jan 23, 2025Updated last year
- β60Mar 9, 2023Updated 3 years ago
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Promptingβ20Mar 25, 2024Updated last year
- We jailbreak GPT-3.5 Turboβs safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20β¦β345Feb 23, 2024Updated 2 years ago
- The Security Toolkit for LLM Interactionsβ2,699Dec 15, 2025Updated 3 months ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts"β42Jul 8, 2024Updated last year
- β197Nov 26, 2023Updated 2 years ago
- Implementation of BEAST adversarial attack for language models (ICML 2024)β89May 14, 2024Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ652Feb 16, 2026Updated last month
- Repository for "StrongREJECT for Empty Jailbreaks" paperβ154Nov 3, 2024Updated last year
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engβ¦β3,556Updated this week
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".β70Oct 23, 2024Updated last year
- autoredteam: code for training models that automatically red team other language modelsβ15Aug 9, 2023Updated 2 years ago
- Papers and resources related to the security and privacy of LLMs π€β567Jun 8, 2025Updated 9 months ago
- TAP: An automated jailbreaking method for black-box LLMsβ224Dec 10, 2024Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]β553Apr 4, 2025Updated 11 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilitiesβ34May 27, 2024Updated last year
- β98Oct 15, 2023Updated 2 years ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"β1,827Jun 17, 2025Updated 9 months ago
- Every practical and proposed defense against prompt injection.β659Feb 22, 2025Updated last year
- LLM prompt attacks for hacker CTFs via CTFd.β15Dec 17, 2023Updated 2 years ago
- A framework to evaluate the generalization capability of safety alignment for LLMsβ625Oct 9, 2025Updated 5 months ago