tldrsec / prompt-injection-defenses
Every practical and proposed defense against prompt injection.
☆424 · Updated 2 months ago
Alternatives and similar repositories for prompt-injection-defenses:
Users interested in prompt-injection-defenses are comparing it to the repositories listed below.
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆378 · Updated last year
- Dropbox LLM Security research code and results ☆222 · Updated 11 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆188 · Updated last week
- A benchmark for prompt injection detection systems ☆100 · Updated 2 months ago
- OWASP Foundation Web Repository ☆700 · Updated this week
- Make your GenAI Apps Safe & Secure: test & harden your system prompt ☆464 · Updated 6 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠: the first open-source fuzzing framework specifically designed … ☆274 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆361 · Updated last year
- Prompt Injection Primer for Engineers ☆430 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs ☆165 · Updated 4 months ago
- ☆540 · Updated 4 months ago
- A curated list of large language model tools for cybersecurity research ☆449 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs) ☆109 · Updated last year
- OWASP Foundation Web Repository ☆250 · Updated last week
- A collection of awesome resources related to AI security ☆206 · Updated this week
- ☆265 · Updated last year
- A curation of awesome tools, documents and projects about LLM Security ☆1,186 · Updated last week
- Protection against Model Serialization Attacks ☆462 · Updated this week
- Test Software for the Characterization of AI Technologies ☆246 · Updated this week
- The automated prompt injection framework for LLM-integrated applications ☆198 · Updated 7 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system ☆279 · Updated 8 months ago
- LLM security and privacy ☆48 · Updated 6 months ago
- A prompt injection scanner for custom LLM applications ☆780 · Updated last month
- LLM Prompt Injection Detector ☆1,254 · Updated 8 months ago
- Awesome LLM Jailbreak academic papers ☆93 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents ☆130 · Updated 3 weeks ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆477 · Updated 7 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities ☆162 · Updated last year
- A curated list of awesome security tools, experimental cases and other interesting things with LLM or GPT ☆588 · Updated 3 months ago
- Papers about red teaming LLMs and multimodal models ☆109 · Updated 5 months ago