hupe1980 / aisploit
Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
☆28 · Updated last year
Alternatives and similar repositories for aisploit
Users that are interested in aisploit are comparing it to the libraries listed below
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆86 · Updated 2 weeks ago
- All things specific to LLM red teaming of generative AI ☆29 · Updated last year
- A collection of agents that use Large Language Models (LLMs) to perform tasks common in day-to-day cybersecurity jobs. ☆243 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆67 · Updated last year
- LLMFuzzer - Fuzzing Framework for Large Language Models. LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆336 · Updated last year
- A collection of prompt injection mitigation techniques. ☆26 · Updated 2 years ago
- All about LLM-agent security: attacks, vulnerabilities, and how to carry them out for cybersecurity. ☆40 · Updated last month
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracle ☆109 · Updated 2 years ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆35 · Updated last year
- Payloads for Attacking Large Language Models ☆118 · Updated 2 weeks ago
- ☆168 · Updated last month
- A dataset intended to train an LLM on completely CVE-focused input and output. ☆66 · Updated 7 months ago
- Top 10 for Agentic AI (AI Agent Security) serves as the core for OWASP and CSA red teaming work ☆164 · Updated 3 months ago
- LLM security and privacy ☆53 · Updated last year
- Vigil - Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆439 · Updated last year
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767 ☆163 · Updated 9 months ago
- A very simple open source implementation of Google's Project Naptime ☆183 · Updated 10 months ago
- ☆24 · Updated 2 years ago
- MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols ☆27 · Updated 4 months ago
- Penetration Testing AI Assistant based on open source LLMs. ☆115 · Updated 9 months ago
- Learn about a type of vulnerability that specifically targets machine learning models ☆398 · Updated 4 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Updated 2 years ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆93 · Updated 8 months ago
- The most comprehensive prompt hacking course available, which records our progress on a prompt engineering and prompt hacking cour… ☆120 · Updated 9 months ago
- A library to produce cybersecurity exploitation routes (exploit flows). Inspired by TensorFlow. ☆37 · Updated 2 years ago
- Secure Jupyter Notebooks and Experimentation Environment ☆84 · Updated 11 months ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆106 · Updated 2 weeks ago
- ☆132 · Updated 6 months ago
- Code snippets to reproduce MCP tool poisoning attacks. ☆191 · Updated 9 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆116 · Updated 3 months ago