ipa-lab / hackingBuddyGPT
Helping Ethical Hackers use LLMs in 50 Lines of Code or less..
☆492 · Updated this week
Alternatives and similar repositories for hackingBuddyGPT:
Users interested in hackingBuddyGPT are comparing it to the repositories listed below:
- Make your GenAI Apps Safe & Secure: Test & harden your system prompt ☆429 · Updated 3 months ago
- A curated list of large language model tools for cybersecurity research. ☆414 · Updated 9 months ago
- 🧨 LLMFuzzer - Fuzzing Framework for Large Language Models 🧨 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆247 · Updated 11 months ago
- Agentic LLM Vulnerability Scanner / AI red teaming kit ☆925 · Updated this week
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆276 · Updated last month
- Train LLMs on private data. Simply make an API request to our training endpoint specifying your data and model. LangDrive will handle the … ☆143 · Updated 5 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆339 · Updated 11 months ago
- Protection against Model Serialization Attacks ☆361 · Updated this week
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… ☆230 · Updated this week
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆261 · Updated 4 months ago
- Automatically tests prompt injection attacks on ChatGPT instances ☆681 · Updated last year
- A curated list of awesome security tools, experimental cases, and other interesting things related to LLMs or GPT. ☆569 · Updated this week
- Learn about a type of vulnerability that specifically targets machine learning models ☆210 · Updated 6 months ago
- Some prompts about cyber security ☆168 · Updated last year
- Every practical and proposed defense against prompt injection. ☆372 · Updated 7 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆132 · Updated last year
- Zero-shot vulnerability discovery using LLMs ☆1,340 · Updated 2 months ago
- An overview of LLMs for cybersecurity. ☆567 · Updated last week
- OWASP Foundation Web Repository ☆621 · Updated this week
- Using Agents To Automate Pentesting ☆183 · Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆154 · Updated last year
- This repository contains various attacks against Large Language Models. ☆86 · Updated 7 months ago
- Test Software for the Characterization of AI Technologies ☆235 · Updated this week
- CodeGate: CodeGen Privacy and Security ☆226 · Updated this week
- Dropbox LLM Security research code and results ☆219 · Updated 7 months ago
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ☆43 · Updated 7 months ago
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE… ☆1,037 · Updated last month
- LLM-powered pentesting for your software ☆50 · Updated 9 months ago
- Code release for Best-of-N Jailbreaking ☆411 · Updated 3 weeks ago
- Use AI to Scan Your Code from the Command Line for security and code smells. Bring your own keys. Supports OpenAI and Gemini. ☆154 · Updated 10 months ago