ipa-lab / hackingBuddyGPT
Helping Ethical Hackers use LLMs in 50 Lines of Code or less..
☆612 · Updated 2 weeks ago
Alternatives and similar repositories for hackingBuddyGPT
Users interested in hackingBuddyGPT are comparing it to the libraries listed below.
- A curated list of large language model tools for cybersecurity research. (☆460, updated last year)
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … (☆282, updated last year)
- A collection of prompts about cyber security. (☆217, updated last year)
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. (☆289, updated 10 months ago)
- A collection of awesome resources related to AI security. (☆248, updated this week)
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs. (☆394, updated last year)
- Penetration Testing AI Assistant based on open-source LLMs. (☆83, updated 2 months ago)
- A prompt injection scanner for custom LLM applications. (☆819, updated 3 months ago)
- Make your GenAI apps safe and secure: test and harden your system prompt. (☆504, updated last week)
- Dropbox LLM Security research code and results. (☆227, updated last year)
- Protection against Model Serialization Attacks. (☆507, updated this week)
- Prompt Injection Primer for Engineers. (☆442, updated last year)
- Uses the ChatGPT API, Bard API, and Llama2, with Python-Nmap, DNS recon, PCAP, and JWT recon modules, and uses the GPT-3 model to create vulnerabilit… (☆562, updated 7 months ago)
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… (☆615, updated 2 weeks ago)
- A curated list of awesome security tools, experimental cases, and other interesting things involving LLMs or GPT. (☆603, updated 3 weeks ago)
- Automated web vulnerability scanning with LLM agents. (☆328, updated this week)
- Zero-shot vulnerability discovery using LLMs. (☆1,818, updated 4 months ago)
- Learn about a type of vulnerability that specifically targets machine learning models. (☆302, updated last year)
- Using Agents To Automate Pentesting. (☆278, updated 5 months ago)
- An overview of LLMs for cybersecurity. (☆940, updated last month)
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. (☆163, updated last year)
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. (☆324, updated 6 months ago)
- An extension for Burp Suite that allows researchers to use GPT for analysis of HTTP requests and responses. (☆111, updated 2 years ago)
- LLM-powered pentesting for your software. (☆131, updated last week)
- ☆294, updated this week
- Use AI to scan your code from the command line for security issues and code smells. Bring your own keys. Supports OpenAI and Gemini. (☆171, updated 2 months ago)
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE… (☆1,161, updated 3 weeks ago)
- Train LLMs on private data. Simply make an API request to our training endpoint specifying your data and model. LangDrive will handle the … (☆160, updated 10 months ago)
- AI-powered cybersecurity chatbot designed to provide helpful and accurate answers to your cybersecurity-related queries and also do code … (☆309, updated 7 months ago)
- A dataset intended to train an LLM for completely CVE-focused input and output. (☆60, updated 6 months ago)