hupe1980 / aisploit
Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
☆28 · Updated last year
Alternatives and similar repositories for aisploit
Users interested in aisploit are comparing it to the libraries listed below.
- LLM | Security | Operations in one GitHub repo with good links and pictures. · ☆86 · Updated last week
- A collection of prompt injection mitigation techniques. · ☆26 · Updated 2 years ago
- A dataset intended to train an LLM for completely CVE-focused input and output. · ☆66 · Updated 6 months ago
- All things specific to red teaming LLMs and generative AI. · ☆29 · Updated last year
- Payloads for Attacking Large Language Models · ☆116 · Updated 7 months ago
- All about LLM-agent security: attacks, vulnerabilities, and how to carry them out for cybersecurity research. · ☆39 · Updated last week
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… · ☆35 · Updated last year
- SourceGPT: prompt manager and source code analyzer built on top of ChatGPT as the oracle. · ☆109 · Updated 2 years ago
- LLMFuzzer: a fuzzing framework for large language models. LLMFuzzer is the first open-source fuzzing framework specifically designed … · ☆333 · Updated last year
- https://arxiv.org/abs/2412.02776 · ☆67 · Updated last year
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… · ☆101 · Updated 3 months ago
- The most comprehensive prompt hacking course available, which records our progress on a prompt engineering and prompt hacking cour… · ☆120 · Updated 8 months ago
- ☆109 · Updated 5 months ago
- MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols. · ☆24 · Updated 3 months ago
- Top 10 for Agentic AI (AI Agent Security), serving as the core for OWASP and CSA red teaming work. · ☆161 · Updated 3 months ago
- The fastest trust layer for AI agents. · ☆145 · Updated 7 months ago
- A collection of agents that use Large Language Models (LLMs) to perform tasks common in our day-to-day cybersecurity jobs. · ☆233 · Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs. · ☆436 · Updated last year
- A curated list of awesome LLM red teaming training, resources, and tools. · ☆65 · Updated 4 months ago
- Some prompts about cyber security. · ☆288 · Updated 2 years ago
- An implementation of a Model Context Protocol (MCP) for the Nuclei scanner. This tool enables context-aware vulnerability scanning by int… · ☆36 · Updated 5 months ago
- Code snippets to reproduce MCP tool poisoning attacks. · ☆188 · Updated 8 months ago
- CyberBench: A Multi-Task Cyber LLM Benchmark. · ☆28 · Updated 8 months ago
- ☆24 · Updated 2 years ago
- Learn about a type of vulnerability that specifically targets machine learning models. · ☆395 · Updated 3 months ago
- Delving into the realm of LLM security: an exploration of offensive and defensive tools, unveiling their present capabilities. · ☆167 · Updated 2 years ago
- Curated resources, research, and tools for securing AI systems. · ☆296 · Updated this week
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities. · ☆132 · Updated last week
- ☆66 · Updated 3 months ago
- MCPSafetyScanner: automated MCP safety auditing and remediation using agents. More info: https://www.arxiv.org/abs/2504.03767 · ☆159 · Updated 8 months ago