Every practical and proposed defense against prompt injection.
☆684 · Updated Feb 22, 2025
Alternatives and similar repositories for prompt-injection-defenses
Users interested in prompt-injection-defenses are comparing it to the repositories listed below.
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs (☆434, updated Oct 29, 2025)
- [ACL 2025] The official implementation of the paper "PIGuard: Prompt Injection Guardrail via Mitigating Overdefense for Free" (☆76, updated Dec 4, 2025)
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs (☆474, updated Jan 31, 2024)
- Awesome secure-by-default libraries to help you eliminate bug classes (☆704, updated Dec 6, 2025)
- LLM Prompt Injection Detector (☆1,471, updated Aug 7, 2024)
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks (☆122, updated Apr 15, 2024)
- One Conference 2024 (☆110, updated Oct 1, 2024)
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" (☆96, updated Apr 14, 2026)
- Proof of concept for an anti-phishing browser plugin that works by comparing page screenshots with perceptual hashing algorithms (☆10, updated Apr 3, 2022)
- A security scanner for custom LLM applications (☆1,184, updated Dec 1, 2025)
- Dropbox LLM Security research code and results (☆257, updated May 21, 2024)
- Papers and resources related to the security and privacy of LLMs 🤖 (☆577, updated Jun 8, 2025)
- Practical resources for offensive CI/CD security research; a curation of the best resources the author has seen since 2021 (☆586, updated Feb 12, 2026)
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents (☆548, updated Mar 30, 2026)
- LLM Testing Findings Templates (☆74, updated Feb 14, 2024)
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] (☆112, updated Sep 27, 2024)
- The Security Toolkit for LLM Interactions (☆2,925, updated Dec 15, 2025)
- ☆384, updated Apr 15, 2026
- Do you want to learn AI Security but don't know where to start? Take a look at this map (☆31, updated Apr 23, 2024)
- Red-Teaming Language Models with DSPy (☆256, updated Feb 13, 2025)
- The LLM vulnerability scanner (☆7,756, updated this week)
- The official implementation of the paper "AgentDyn: A Dynamic Open-Ended Benchmark for Evaluating Prompt Injection Attacks of Real-World …" (☆50, updated May 2, 2026)
- The Python Risk Identification Tool for generative AI (PyRIT) is an open-source framework built to empower security professionals and eng… (☆3,785, updated May 2, 2026)
- Prompt Injection Primer for Engineers (☆585, updated Aug 25, 2023)
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… (☆488, updated Apr 27, 2026)
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs; empirical tricks for LLM jailbreaking [NeurIPS 2024] (☆162, updated Nov 30, 2024)
- OWASP Top 10 for Large Language Model Apps (part of the GenAI Security Project) (☆1,228, updated Apr 27, 2026)
- Use AI to scan your code from the command line for security issues and code smells; bring your own keys; supports OpenAI and Gemini (☆175, updated Apr 23, 2025)
- New ways of breaking app-integrated LLMs (☆2,083, updated Jul 17, 2025)
- The official implementation of the pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" (☆70, updated Oct 23, 2024)
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents (☆138, updated Feb 19, 2025)
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents (☆33, updated Jun 24, 2025)
- Universal and Transferable Attacks on Aligned Language Models (☆4,644, updated Aug 2, 2024)
- A curation of awesome tools, documents, and projects about LLM Security (☆1,578, updated Aug 20, 2025)
- Make your GenAI apps safe & secure: test and harden your system prompt (☆679, updated Feb 16, 2026)
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts (☆580, updated Feb 27, 2026)
- ☆25, updated Jan 17, 2025
- ☆34, updated Nov 12, 2024
- Gram is Klarna's own threat model diagramming tool (☆334, updated this week)
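One entry above, the anti-phishing browser plugin, describes its mechanism: comparing page screenshots with perceptual hashing. A minimal sketch of that idea, using a simple average hash over a hypothetical 8×8 grayscale screenshot (not the plugin's actual code, and real implementations first downscale the full screenshot):

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale matrix.

    `pixels` is 8 rows of 8 brightness values (0-255). Each output bit
    is 1 when the corresponding pixel is brighter than the mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means visually similar pages."""
    return bin(h1 ^ h2).count("1")

# Two near-identical synthetic "screenshots": one pixel differs slightly.
page_a = [[10 * (r + c) for c in range(8)] for r in range(8)]
page_b = [row[:] for row in page_a]
page_b[0][0] += 5  # minor rendering difference

print(hamming_distance(average_hash(page_a), average_hash(page_b)))  # → 0
```

A phishing page that visually clones a login screen would hash close to the original despite byte-level differences, which is why perceptual hashes are used here instead of cryptographic ones.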