user1342 / Awesome-LLM-Red-TeamingLinks
A curated list of awesome LLM Red Teaming training, resources, and tools.
β22Updated 3 weeks ago
Alternatives and similar repositories for Awesome-LLM-Red-Teaming
Users that are interested in Awesome-LLM-Red-Teaming are comparing it to the libraries listed below
Sorting:
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defenseβ164Updated 2 years ago
- π€π‘οΈπππ Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.β24Updated last year
- Codebase of https://arxiv.org/abs/2410.14923β49Updated 9 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework.β48Updated 9 months ago
- This repository contains various attack against Large Language Models.β112Updated last year
- An AI-powered application that conducts structured interviews to create and maintain detailed personal profiles across various life aspecβ¦β45Updated 4 months ago
- Websites and tools for OSINT investigations pertaining to Israelβ23Updated 2 months ago
- Learn about a type of vulnerability that specifically targets machine learning modelsβ315Updated last year
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacksβ76Updated 2 months ago
- β30Updated this week
- Manual Prompt Injection / Red Teaming Toolβ35Updated 10 months ago
- All things specific to LLM Red Teaming Generative AIβ28Updated 9 months ago
- Payloads for Attacking Large Language Modelsβ92Updated 2 months ago
- A collection of prompt injection mitigation techniques.β23Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)β123Updated 7 months ago
- https://arxiv.org/abs/2412.02776β59Updated 8 months ago
- Prompt Injections Everywhereβ139Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β164Updated last year
- Tree of Attacks (TAP) Jailbreaking Implementationβ114Updated last year
- This is the official repository for the code used in the paper: "What Was Your Prompt? A Remote Keylogging Attack on AI Assistants", USENβ¦β52Updated 6 months ago
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injectionβ261Updated 3 months ago
- Awesome products for securing AI systems includes open source and commercial options and an infographic licensed CC-BY-SA-4.0.β65Updated last year
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applicationsβ203Updated last year
- π€ A GitHub action that leverages fabric patterns through an agent-based approachβ30Updated 7 months ago
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracleβ111Updated 2 years ago
- A LLM explicitly designed for getting hackedβ155Updated 2 years ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wriβ¦β27Updated 7 months ago
- LLM | Security | Operations in one github repo with good links and pictures.β35Updated 7 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the victβ¦β42Updated 5 months ago
- A CLI wrapper for libmodsecurity (v3.0.10)β13Updated last year