Every practical and proposed defense against prompt injection.
☆673 · Feb 22, 2025 · Updated last year
Alternatives and similar repositories for prompt-injection-defenses
Users interested in prompt-injection-defenses are comparing it to the repositories listed below.
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs — ☆426 · Oct 29, 2025 · Updated 5 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs — ☆469 · Jan 31, 2024 · Updated 2 years ago
- [ACL 2025] The official implementation of the paper "PIGuard: Prompt Injection Guardrail via Mitigating Overdefense for Free" — ☆75 · Dec 4, 2025 · Updated 4 months ago
- Awesome secure-by-default libraries to help you eliminate bug classes! — ☆702 · Dec 6, 2025 · Updated 4 months ago
- LLM Prompt Injection Detector — ☆1,459 · Aug 7, 2024 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks — ☆118 · Apr 15, 2024 · Updated 2 years ago
- One Conference 2024 — ☆111 · Oct 1, 2024 · Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" — ☆95 · Apr 8, 2026 · Updated last week
- Proof of concept for an anti-phishing browser plugin that compares page screenshots using perceptual hashing algorithms — ☆10 · Apr 3, 2022 · Updated 4 years ago
- A security scanner for custom LLM applications — ☆1,175 · Dec 1, 2025 · Updated 4 months ago
- Dropbox LLM Security research code and results — ☆256 · May 21, 2024 · Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖 — ☆571 · Jun 8, 2025 · Updated 10 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents — ☆527 · Mar 30, 2026 · Updated 2 weeks ago
- Practical resources for offensive CI/CD security research, curating the best resources I've seen since 2021 — ☆581 · Feb 12, 2026 · Updated 2 months ago
- LLM Testing Findings Templates — ☆73 · Feb 14, 2024 · Updated 2 years ago
- The official implementation of the paper "AgentDyn: A Dynamic Open-Ended Benchmark for Evaluating Prompt Injection Attacks of Real-World …" — ☆45 · Apr 9, 2026 · Updated last week
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] — ☆112 · Sep 27, 2024 · Updated last year
- The Security Toolkit for LLM Interactions — ☆2,832 · Dec 15, 2025 · Updated 4 months ago
- ☆382 · Apr 18, 2024 · Updated 2 years ago
- Do you want to learn AI security but don't know where to start? Take a look at this map — ☆31 · Apr 23, 2024 · Updated last year
- Red-Teaming Language Models with DSPy — ☆254 · Feb 13, 2025 · Updated last year
- The LLM vulnerability scanner — ☆7,559 · Updated this week
- The Python Risk Identification Tool for generative AI (PyRIT) is an open-source framework built to empower security professionals and eng… — ☆3,679 · Apr 12, 2026 · Updated last week
- Prompt Injection Primer for Engineers — ☆578 · Aug 25, 2023 · Updated 2 years ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… — ☆473 · Feb 26, 2024 · Updated 2 years ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking (NeurIPS 2024) — ☆160 · Nov 30, 2024 · Updated last year
- OWASP Top 10 for Large Language Model Apps (part of the GenAI Security Project) — ☆1,194 · Feb 22, 2026 · Updated last month
- Use AI to scan your code from the command line for security issues and code smells. Bring your own keys; supports OpenAI and Gemini — ☆175 · Apr 23, 2025 · Updated 11 months ago
- New ways of breaking app-integrated LLMs — ☆2,067 · Jul 17, 2025 · Updated 9 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" — ☆69 · Oct 23, 2024 · Updated last year
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents — ☆136 · Feb 19, 2025 · Updated last year
- Universal and Transferable Attacks on Aligned Language Models — ☆4,613 · Aug 2, 2024 · Updated last year
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents — ☆33 · Jun 24, 2025 · Updated 9 months ago
- A curation of awesome tools, documents, and projects about LLM security — ☆1,565 · Aug 20, 2025 · Updated 7 months ago
- Make your GenAI apps safe & secure: test & harden your system prompt — ☆674 · Feb 16, 2026 · Updated 2 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts — ☆577 · Feb 27, 2026 · Updated last month
- ☆25 · Jan 17, 2025 · Updated last year
- ☆34 · Nov 12, 2024 · Updated last year
- Gram is Klarna's own threat-model diagramming tool — ☆333 · Updated this week