user1342 / Awesome-LLM-Red-TeamingLinks
A curated list of awesome LLM Red Teaming training, resources, and tools.
☆63Updated 3 months ago
Alternatives and similar repositories for Awesome-LLM-Red-Teaming
Users that are interested in Awesome-LLM-Red-Teaming are comparing it to the libraries listed below
Sorting:
- Prompt Injections Everywhere☆172Updated last year
- We present MAPTA, a multi-agent system for autonomous web application security assessment that combines large language model orchestratio…☆84Updated 4 months ago
- Payloads for Attacking Large Language Models☆114Updated 6 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆167Updated 2 years ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri…☆35Updated last year
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection☆420Updated 7 months ago
- LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.☆28Updated last year
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆179Updated 2 years ago
- LLM | Security | Operations in one github repo with good links and pictures.☆81Updated last week
- Learn about a type of vulnerability that specifically targets machine learning models☆380Updated 3 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)☆152Updated last year
- A LLM explicitly designed for getting hacked☆165Updated 2 years ago
- A knowledge source about TTPs used to target GenAI-based systems, copilots and agents☆131Updated this week
- Penetration Testing AI Assistant based on open source LLMs.☆113Updated 8 months ago
- Open-source LLM Prompt-Injection and Jailbreaking Playground☆26Updated 5 months ago
- ☆101Updated last month
- AI agent for autonomous cyber operations☆451Updated 3 weeks ago
- Manual Prompt Injection / Red Teaming Tool☆50Updated last year
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks☆92Updated 7 months ago
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection…☆45Updated 9 months ago
- Curated resources, research, and tools for securing AI systems☆288Updated 2 weeks ago
- AI / LLM Red Team Field Manual & Consultant’s Handbook☆216Updated this week
- All things specific to LLM Red Teaming Generative AI☆29Updated last year
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆28Updated last year
- ☆64Updated 4 months ago
- ☆123Updated last week
- Automated red-team toolkit for stress-testing LLM defences - Vector Attacks on LLMs (Gendalf Case Study)☆107Updated 4 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework.☆52Updated last year
- A Python-based tool that monitors dark web sources for mentions of specific organizations for Threat Monitoring.☆23Updated 8 months ago
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover…☆55Updated 2 years ago