Every practical and proposed defense against prompt injection.
☆662 · Updated Feb 22, 2025
Alternatives and similar repositories for prompt-injection-defenses
Users interested in prompt-injection-defenses are comparing it to the libraries listed below.
- [ACL 2025] The official implementation of the paper "PIGuard: Prompt Injection Guardrail via Mitigating Overdefense for Free". ☆63 · Updated Dec 4, 2025
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆413 · Updated Oct 29, 2025
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆467 · Updated Jan 31, 2024
- Awesome secure by default libraries to help you eliminate bug classes! ☆701 · Updated Dec 6, 2025
- LLM Prompt Injection Detector ☆1,451 · Updated Aug 7, 2024
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆115 · Updated Apr 15, 2024
- One Conference 2024 ☆111 · Updated Oct 1, 2024
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆91 · Updated Jul 24, 2025
- Proof of concept for an anti-phishing browser plugin, working by comparing page screenshots with perceptual hashing algorithms. ☆10 · Updated Apr 3, 2022
- a security scanner for custom LLM applications ☆1,152 · Updated Dec 1, 2025
- Dropbox LLM Security research code and results ☆256 · Updated May 21, 2024
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆499 · Updated Mar 12, 2026
- Practical resources for offensive CI/CD security research. Curates the best resources I've seen since 2021. ☆578 · Updated Feb 12, 2026
- LLM Testing Findings Templates ☆75 · Updated Feb 14, 2024
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆112 · Updated Sep 27, 2024
- The Security Toolkit for LLM Interactions ☆2,737 · Updated Dec 15, 2025
- Papers and resources related to the security and privacy of LLMs 🤖 ☆569 · Updated Jun 8, 2025
- ☆382 · Updated Apr 18, 2024
- Do you want to learn AI Security but don't know where to start? Take a look at this map. ☆31 · Updated Apr 23, 2024
- Red-Teaming Language Models with DSPy ☆253 · Updated Feb 13, 2025
- the LLM vulnerability scanner ☆7,391 · Updated this week
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,593 · Updated Mar 22, 2026
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆469 · Updated Feb 26, 2024
- Prompt Injection Primer for Engineers ☆578 · Updated Aug 25, 2023
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆160 · Updated Nov 30, 2024
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) ☆1,164 · Updated Feb 22, 2026
- New ways of breaking app-integrated LLMs ☆2,066 · Updated Jul 17, 2025
- Use AI to scan your code from the command line for security issues and code smells. Bring your own keys. Supports OpenAI and Gemini. ☆176 · Updated Apr 23, 2025
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆70 · Updated Oct 23, 2024
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆135 · Updated Feb 19, 2025
- Universal and Transferable Attacks on Aligned Language Models ☆4,583 · Updated Aug 2, 2024
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents ☆33 · Updated Jun 24, 2025
- A curation of awesome tools, documents and projects about LLM Security. ☆1,554 · Updated Aug 20, 2025
- Make your GenAI apps safe & secure: test & harden your system prompt. ☆660 · Updated Feb 16, 2026
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆576 · Updated Feb 27, 2026
- ☆25 · Updated Jan 17, 2025
- ☆34 · Updated Nov 12, 2024
- Gram is Klarna's own threat model diagramming tool ☆332 · Updated this week
- TAP: An automated jailbreaking method for black-box LLMs ☆226 · Updated Dec 10, 2024