pdparchitect / llm-hacking-databaseLinks
This repository contains various attack against Large Language Models.
☆126Updated last year
Alternatives and similar repositories for llm-hacking-database
Users that are interested in llm-hacking-database are comparing it to the libraries listed below
Sorting:
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆182Updated 2 years ago
- Learn about a type of vulnerability that specifically targets machine learning models☆398Updated 4 months ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote…☆202Updated 3 months ago
- Prompt Injections Everywhere☆176Updated last year
- Payloads for Attacking Large Language Models☆118Updated 2 weeks ago
- Dropbox LLM Security research code and results☆254Updated last year
- Penetration Testing AI Assistant based on open source LLMs.☆115Updated 9 months ago
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracle☆109Updated 2 years ago
- A ChatGPT based penetration testing findings generator.☆134Updated 2 years ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆166Updated 2 years ago
- A LLM explicitly designed for getting hacked☆166Updated 2 years ago
- an extension for Burp Suite to allow researchers to utilize GPT for analys is of HTTP requests and responses☆112Updated 2 years ago
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems☆222Updated 4 months ago
- LLM | Security | Operations in one github repo with good links and pictures.☆86Updated 2 weeks ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks☆93Updated 8 months ago
- Prompt Injection Primer for Engineers☆546Updated 2 years ago
- A curated list of awesome LLM Red Teaming training, resources, and tools.☆72Updated 4 months ago
- Manual Prompt Injection / Red Teaming Tool☆51Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆439Updated 2 years ago
- An AI-powered application that conducts structured interviews to create and maintain detailed personal profiles across various life aspec…☆55Updated 10 months ago
- A curated list of large language model tools for cybersecurity research.☆480Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)☆152Updated last year
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆420Updated 6 months ago
- Community curated list of search queries for various products across multiple search engines.☆364Updated this week
- ASCII Smuggling Hidden Prompt Injection is a novel approach to hacking AI assistants using Unicode Tags. This project demostrate how to u…☆18Updated last year
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).☆152Updated 2 years ago
- Codebase of https://arxiv.org/abs/2410.14923☆54Updated last year
- Awesome products for securing AI systems includes open source and commercial options and an infographic licensed CC-BY-SA-4.0.☆84Updated last year
- ☆53Updated this week
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆642Updated last month