tenable / awesome-llm-cybersecurity-tools
A curated list of large language model tools for cybersecurity research.
β451Updated last year
Alternatives and similar repositories for awesome-llm-cybersecurity-tools:
Users that are interested in awesome-llm-cybersecurity-tools are comparing it to the libraries listed below
- A collection of awesome resources related AI securityβ217Updated this week
- an extension for Burp Suite to allow researchers to utilize GPT for analys is of HTTP requests and responsesβ108Updated 2 years ago
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β274Updated last year
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.β592Updated 3 months ago
- β242Updated 3 months ago
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ380Updated last year
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.β283Updated 8 months ago
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITREβ¦β1,147Updated 3 weeks ago
- OWASP Foundation Web Respositoryβ254Updated this week
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ470Updated 6 months ago
- some prompt about cyber securityβ204Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β161Updated last year
- Dropbox LLM Security research code and resultsβ224Updated 11 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.β315Updated 4 months ago
- A collection of agents that use Large Language Models (LLMs) to perform tasks common on our day to day jobs in cyber security.β108Updated 11 months ago
- Prompt Injection Primer for Engineersβ430Updated last year
- Learn about a type of vulnerability that specifically targets machine learning modelsβ265Updated 10 months ago
- Every practical and proposed defense against prompt injection.β439Updated 2 months ago
- Test Software for the Characterization of AI Technologiesβ246Updated this week
- β367Updated last year
- Payloads for Attacking Large Language Modelsβ81Updated 9 months ago
- Galah: An LLM-powered web honeypot.β535Updated 2 weeks ago
- OWASP Foundation Web Respositoryβ711Updated this week
- β275Updated last year
- Protection against Model Serialization Attacksβ474Updated this week
- A LLM explicitly designed for getting hackedβ148Updated last year
- A python module for working with ATT&CKβ542Updated 2 weeks ago
- An ever-growing list of resources for data-driven vulnerability assessment and prioritizationβ124Updated 2 years ago
- All things specific to LLM Red Teaming Generative AIβ24Updated 6 months ago
- CALDERA plugin for adversary emulation of AI-enabled systemsβ95Updated last year