FonduAI / awesome-prompt-injectionLinks
Learn about a type of vulnerability that specifically targets machine learning models
☆328Updated last year
Alternatives and similar repositories for awesome-prompt-injection
Users that are interested in awesome-prompt-injection are comparing it to the libraries listed below
Sorting:
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆166Updated 2 years ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆361Updated 3 weeks ago
- A collection of awesome resources related AI security☆289Updated last week
- A LLM explicitly designed for getting hacked☆158Updated 2 years ago
- Prompt Injections Everywhere☆142Updated last year
- Prompt Injection Primer for Engineers☆460Updated 2 years ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆164Updated last year
- Payloads for Attacking Large Language Models☆96Updated 2 months ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆622Updated 2 weeks ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆297Updated last year
- CTF challenges designed and implemented in machine learning applications☆167Updated last year
- some prompt about cyber security☆237Updated 2 years ago
- A curated list of large language model tools for cybersecurity research.☆470Updated last year
- Penetration Testing AI Assistant based on open source LLMs.☆94Updated 4 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆409Updated last year
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆24Updated last year
- A curated list of awesome LLM Red Teaming training, resources, and tools.☆29Updated last month
- ☆317Updated 2 months ago
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection☆294Updated 3 months ago
- Every practical and proposed defense against prompt injection.☆532Updated 6 months ago
- Dropbox LLM Security research code and results☆233Updated last year
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover…☆46Updated last year
- This repository contains various attack against Large Language Models.☆113Updated last year
- LLM | Security | Operations in one github repo with good links and pictures.☆49Updated 7 months ago
- a security scanner for custom LLM applications☆939Updated last week
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai…☆706Updated last month
- ☆604Updated last month
- A collection of agents that use Large Language Models (LLMs) to perform tasks common on our day to day jobs in cyber security.☆152Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆406Updated last year
- Manual Prompt Injection / Red Teaming Tool☆37Updated 10 months ago