Every practical and proposed defense against prompt injection.
☆645 · Updated Feb 22, 2025
Alternatives and similar repositories for prompt-injection-defenses
Users interested in prompt-injection-defenses are comparing it to the libraries listed below.
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs. ☆396 · Updated Oct 29, 2025
- Awesome secure-by-default libraries to help you eliminate bug classes! ☆700 · Updated Dec 6, 2025
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs. ☆459 · Updated Jan 31, 2024
- LLM Prompt Injection Detector. ☆1,426 · Updated Aug 7, 2024
- A security scanner for custom LLM applications. ☆1,140 · Updated Dec 1, 2025
- [ACL 2025] The official implementation of the paper "PIGuard: Prompt Injection Guardrail via Mitigating Overdefense for Free". ☆61 · Updated Dec 4, 2025
- One Conference 2024. ☆111 · Updated Oct 1, 2024
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024]. ☆109 · Updated Sep 27, 2024
- Dropbox LLM Security research code and results. ☆255 · Updated May 21, 2024
- A dynamic environment to evaluate attacks and defenses for LLM agents. ☆454 · Updated Feb 3, 2026
- Practical resources for offensive CI/CD security research. Curated the best resources I've seen since 2021. ☆575 · Updated Feb 12, 2026
- Papers and resources related to the security and privacy of LLMs 🤖 ☆570 · Updated Jun 8, 2025
- Red-Teaming Language Models with DSPy. ☆254 · Updated Feb 13, 2025
- Do you want to learn AI Security but don't know where to start? Take a look at this map. ☆30 · Updated Apr 23, 2024
- The Security Toolkit for LLM Interactions. ☆2,620 · Updated Dec 15, 2025
- ☆382 · Updated Apr 18, 2024
- Proof of concept for an anti-phishing browser plugin, working by comparing page screenshots with perceptual hashing algorithms. ☆10 · Updated Apr 3, 2022
- Prompt Injection Primer for Engineers. ☆558 · Updated Aug 25, 2023
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization". ☆88 · Updated Jul 24, 2025
- The Python Risk Identification Tool for generative AI (PyRIT) is an open-source framework built to empower security professionals and eng… ☆3,527 · Updated this week
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking (NeurIPS 2024). ☆163 · Updated Nov 30, 2024
- The LLM vulnerability scanner. ☆7,164 · Updated this week
- OWASP Top 10 for Large Language Model Apps (part of the GenAI Security Project). ☆1,121 · Updated Feb 22, 2026
- Make your GenAI apps safe and secure: test and harden your system prompt. ☆637 · Updated Feb 16, 2026
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents. ☆31 · Updated Jun 24, 2025
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆106 · Updated Apr 15, 2024
- Protection against Model Serialization Attacks. ☆647 · Updated Feb 18, 2026
- New ways of breaking app-integrated LLMs. ☆2,055 · Updated Jul 17, 2025
- LLM Testing Findings Templates. ☆75 · Updated Feb 14, 2024
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆459 · Updated Feb 26, 2024
- Universal and Transferable Attacks on Aligned Language Models. ☆4,534 · Updated Aug 2, 2024
- Use AI to scan your code from the command line for security issues and code smells. Bring your own keys; supports OpenAI and Gemini. ☆176 · Updated Apr 23, 2025
- ☆119 · Updated Jul 2, 2024
- ☆23 · Updated Jan 17, 2025
- Enumeration/exploit/analysis/download/etc. pentesting framework for GCP; modeled like Pacu for AWS; a product of numerous hours via @Webbi… ☆288 · Updated May 16, 2025
- Collection of Semgrep rules for security analysis. ☆10 · Updated Mar 30, 2024
- A curation of awesome tools, documents, and projects about LLM Security. ☆1,537 · Updated Aug 20, 2025
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents. ☆130 · Updated Feb 19, 2025
- ☆18 · Updated Jun 11, 2024