tldrsec / prompt-injection-defenses
Every practical and proposed defense against prompt injection.
☆372Updated 7 months ago
Alternatives and similar repositories for prompt-injection-defenses:
Users that are interested in prompt-injection-defenses are comparing it to the libraries listed below
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆339Updated 11 months ago
- Dropbox LLM Security research code and results☆219Updated 7 months ago
- Prompt Injection Primer for Engineers☆403Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆429Updated 3 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆247Updated 11 months ago
- A benchmark for prompt injection detection systems.☆94Updated 4 months ago
- OWASP Foundation Web Respository☆220Updated this week
- OWASP Foundation Web Respository☆621Updated this week
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆323Updated 10 months ago
- ☆461Updated last month
- Test Software for the Characterization of AI Technologies☆235Updated this week
- A collection of awesome resources related AI security☆154Updated 3 weeks ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆261Updated 4 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆108Updated 10 months ago
- ☆360Updated 9 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆154Updated last year
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆276Updated last month
- A curated list of large language model tools for cybersecurity research.☆414Updated 9 months ago
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses☆163Updated this week
- ☆114Updated last month
- source for llmsec.net☆13Updated 5 months ago
- ☆192Updated last year
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks☆59Updated last month
- Protection against Model Serialization Attacks☆361Updated this week
- A LLM explicitly designed for getting hacked☆134Updated last year
- automatically tests prompt injection attacks on ChatGPT instances☆681Updated last year
- LLM security and privacy☆43Updated 3 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆210Updated 6 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]☆247Updated 3 months ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆569Updated this week