phreakAI / metasploit-gym
An environment for testing AI agents against networks using Metasploit.
☆43 · Updated 2 years ago
Alternatives and similar repositories for metasploit-gym
Users interested in metasploit-gym are comparing it to the repositories listed below.
- Adversarial Machine Learning (AML) Capture the Flag (CTF) ☆102 · Updated last year
- Data Scientists Go To Jupyter ☆63 · Updated 4 months ago
- ☆138 · Updated 2 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆111 · Updated last year
- ChainReactor is a research project that leverages AI planning to discover exploitation chains for privilege escalation on Unix systems. T… ☆48 · Updated 8 months ago
- ATLAS tactics, techniques, and case studies data ☆76 · Updated 2 months ago
- An extension for Burp Suite that allows researchers to use GPT for analysis of HTTP requests and responses ☆111 · Updated 2 years ago
- Code for the shelLM tool ☆55 · Updated 5 months ago
- Source code for the offsecml framework ☆41 · Updated last year
- ☆65 · Updated 5 months ago
- A curated list of large language model tools for cybersecurity research ☆465 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆59 · Updated 7 months ago
- Payloads for Attacking Large Language Models ☆91 · Updated last month
- ☆41 · Updated this week
- ☆105 · Updated last year
- ☆22 · Updated 2 years ago
- Lightweight LLM Interaction Framework ☆296 · Updated this week
- ☆254 · Updated 6 months ago
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆75 · Updated 2 months ago
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications ☆203 · Updated last year
- A collection of agents that use large language models (LLMs) to perform common day-to-day cybersecurity tasks ☆136 · Updated last year
- ☆12 · Updated 2 years ago
- Code repository for "Machine Learning for Red Team Hackers" ☆37 · Updated 5 years ago
- A library to produce cybersecurity exploitation routes (exploit flows), inspired by TensorFlow ☆35 · Updated last year
- ☆41 · Updated 7 months ago
- An LLM explicitly designed for getting hacked ☆153 · Updated last year
- An environment for testing AI pentesting agents against a simulated network ☆189 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆122 · Updated 6 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs) ☆112 · Updated last year
- Using ML models for red teaming ☆43 · Updated last year