deadbits / vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
☆308Updated 9 months ago
Related projects ⓘ
Alternatives and complementary repositories for vigil-llm
- Dropbox LLM Security research code and results☆216Updated 5 months ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆398Updated 3 weeks ago
- Every practical and proposed defense against prompt injection.☆339Updated 5 months ago
- A curated list of large language model tools for cybersecurity research.☆390Updated 7 months ago
- OWASP Foundation Web Respository☆206Updated last week
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆238Updated 3 weeks ago
- Protection against Model Serialization Attacks☆313Updated this week
- A collection of awesome resources related AI security☆123Updated 7 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆231Updated 8 months ago
- Prompt Injection Primer for Engineers☆357Updated last year
- OWASP Foundation Web Respository☆567Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆149Updated last year
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆548Updated this week
- Test Software for the Characterization of AI Technologies☆225Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆106Updated 7 months ago
- The Security Toolkit for LLM Interactions☆1,231Updated 2 weeks ago
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses☆141Updated 2 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆306Updated 8 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆121Updated last year
- ☆184Updated 3 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆248Updated 2 months ago
- OWASP Machine Learning Security Top 10 Project☆76Updated 2 months ago
- ☆402Updated 2 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆181Updated 4 months ago
- A LLM explicitly designed for getting hacked☆129Updated last year
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications☆193Updated 8 months ago
- This is a dataset intended to train a LLM model for a completely CVE focused input and output.☆44Updated 4 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities☆25Updated 5 months ago
- automatically tests prompt injection attacks on ChatGPT instances☆639Updated 11 months ago
- An overview of LLMs for cybersecurity.☆408Updated last month