ipa-lab / hackingBuddyGPT
Helping Ethical Hackers use LLMs in 50 Lines of Code or less. ⭐537 · Updated this week
Alternatives and similar repositories for hackingBuddyGPT:
Users interested in hackingBuddyGPT are comparing it to the libraries listed below:
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt. ⭐449 · Updated 5 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed… ⭐269 · Updated last year
- A curated list of large language model tools for cybersecurity research. ⭐436 · Updated 11 months ago
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪 ⭐1,193 · Updated this week
- Automated web vulnerability scanning with LLM agents ⭐264 · Updated 2 weeks ago
- Some prompts about cybersecurity ⭐190 · Updated last year
- Learn about a type of vulnerability that specifically targets machine learning models ⭐233 · Updated 9 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ⭐363 · Updated last year
- Protection against Model Serialization Attacks ⭐431 · Updated this week
- Zero-shot vulnerability discovery using LLMs ⭐1,581 · Updated last month
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ⭐297 · Updated 3 months ago
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… ⭐439 · Updated last week
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees, run everything locally on your system. ⭐274 · Updated 7 months ago
- A prompt injection scanner for custom LLM applications ⭐758 · Updated 2 weeks ago
- An overview of LLMs for cybersecurity. ⭐751 · Updated this week
- A dataset intended to train an LLM on completely CVE-focused inputs and outputs. ⭐55 · Updated 4 months ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT. ⭐583 · Updated 2 months ago
- AI-Powered Penetration Testing Assistant ⭐973 · Updated this week
- An extension for Burp Suite that lets researchers use GPT for analysis of HTTP requests and responses ⭐102 · Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ⭐160 · Updated last year
- A collection of awesome resources related to AI security ⭐190 · Updated last month
- This repository contains various attacks against Large Language Models. ⭐96 · Updated 10 months ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ⭐2,310 · Updated this week
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE… ⭐1,095 · Updated last month
- Prompt Injections Everywhere ⭐110 · Updated 7 months ago
- Prompt Injection Primer for Engineers ⭐423 · Updated last year
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ⭐144 · Updated last year
- Dropbox LLM Security research code and results ⭐221 · Updated 10 months ago
- A comprehensive local Linux Privilege-Escalation Benchmark ⭐29 · Updated 3 months ago
- Every practical and proposed defense against prompt injection. ⭐405 · Updated last month