tldrsec / prompt-injection-defensesView external linksLinks
Every practical and proposed defense against prompt injection.
☆630Feb 22, 2025Updated 11 months ago
Alternatives and similar repositories for prompt-injection-defenses
Users that are interested in prompt-injection-defenses are comparing it to the libraries listed below
Sorting:
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs☆391Oct 29, 2025Updated 3 months ago
- Awesome secure by default libraries to help you eliminate bug classes!☆699Dec 6, 2025Updated 2 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆454Jan 31, 2024Updated 2 years ago
- LLM Prompt Injection Detector☆1,415Aug 7, 2024Updated last year
- [ACL 2025] The official implementation of the paper "PIGuard: Prompt Injection Guardrail via Mitigating Overdefense for Free".☆59Dec 4, 2025Updated 2 months ago
- a security scanner for custom LLM applications☆1,126Dec 1, 2025Updated 2 months ago
- One Conference 2024☆111Oct 1, 2024Updated last year
- Code&Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024]☆109Sep 27, 2024Updated last year
- Dropbox LLM Security research code and results☆254May 21, 2024Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆431Feb 3, 2026Updated 2 weeks ago
- Practical resources for offensive CI/CD security research. Curated the best resources I've seen since 2021.☆570Jan 28, 2026Updated 2 weeks ago
- Do you want to learn AI Security but don't know where to start ? Take a look at this map.☆29Apr 23, 2024Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖☆563Jun 8, 2025Updated 8 months ago
- The Security Toolkit for LLM Interactions☆2,537Dec 15, 2025Updated 2 months ago
- ☆381Apr 18, 2024Updated last year
- Proof of concept for an anti-phishing browser plugin, working by comparing pages screenshots with perceptual hashing algorithms.☆10Apr 3, 2022Updated 3 years ago
- Prompt Injection Primer for Engineers☆547Aug 25, 2023Updated 2 years ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"☆84Jul 24, 2025Updated 6 months ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng…☆3,408Feb 10, 2026Updated last week
- Red-Teaming Language Models with DSPy☆251Feb 13, 2025Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".☆68Oct 23, 2024Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)☆163Nov 30, 2024Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆631Updated this week
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents☆31Jun 24, 2025Updated 7 months ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆1,076Feb 3, 2026Updated 2 weeks ago
- the LLM vulnerability scanner☆6,989Updated this week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆104Apr 15, 2024Updated last year
- New ways of breaking app-integrated LLMs☆2,052Jul 17, 2025Updated 7 months ago
- LLM Testing Findings Templates☆75Feb 14, 2024Updated 2 years ago
- Protection against Model Serialization Attacks☆645Nov 24, 2025Updated 2 months ago
- Universal and Transferable Attacks on Aligned Language Models☆4,493Aug 2, 2024Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆455Feb 26, 2024Updated last year
- Use AI to Scan Your Code from the Command Line for security and code smells. Bring your own keys. Supports OpenAI and Gemini☆176Apr 23, 2025Updated 9 months ago
- ☆117Jul 2, 2024Updated last year
- ☆23Jan 17, 2025Updated last year
- Enumeration/exploit/analysis/download/etc pentesting framework for GCP; modeled like Pacu for AWS; a product of numerous hours via @Webbi…☆284May 16, 2025Updated 9 months ago
- Collection of Semgrep rules for security analysis☆10Mar 30, 2024Updated last year
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents☆123Feb 19, 2025Updated 11 months ago
- A research project to add some brrrrrr to Burp☆197Feb 10, 2025Updated last year