lucagioacchini / auto-pen-bench
This repo contains the code of the penetration testing benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking Generative Agents for Penetration Testing". It also contains instructions to install, develop, and test new vulnerable containers to include in the benchmark.
☆17 · Updated 4 months ago
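The repository's own README covers the container workflow in detail; as a rough, hypothetical sketch of the kind of smoke test one might run while developing a new vulnerable container, the snippet below launches a candidate image and probes it over TCP. The image tag, port mapping, and service are illustrative placeholders, not AutoPenBench's actual layout.

```python
# Hypothetical smoke test for a newly built vulnerable container.
# The image tag, port mapping, and service are placeholders for
# illustration only; follow the repo's instructions for the real layout.
import socket
import time

import docker  # pip install docker

client = docker.from_env()

# Launch the candidate container, mapping its (hypothetical) SSH service
# to a local port so a benchmark agent could reach it.
container = client.containers.run(
    "autopenbench/my-new-task:latest",  # placeholder image tag
    detach=True,
    ports={"22/tcp": 2222},
)

try:
    time.sleep(2)  # give the service a moment to start
    container.reload()
    assert container.status == "running", f"unexpected status: {container.status}"

    # Basic reachability probe: open a TCP connection to the mapped port.
    with socket.create_connection(("127.0.0.1", 2222), timeout=5):
        print("service reachable; container ready for agent testing")
finally:
    container.remove(force=True)
```

Driving the check through the Docker SDK keeps it self-contained: the container is force-removed in the `finally` block even if the reachability probe fails.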
Alternatives and similar repositories for auto-pen-bench:
Users interested in auto-pen-bench are comparing it to the libraries listed below
- ☆77 · Updated 2 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆62 · Updated 3 weeks ago
- A library to produce cybersecurity exploitation routes (exploit flows). Inspired by TensorFlow. ☆33 · Updated last year
- ☆34 · Updated 2 weeks ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆49 · Updated 2 weeks ago
- ☆52 · Updated 7 months ago
- This is a dataset intended to train an LLM on completely CVE-focused inputs and outputs. ☆49 · Updated 2 months ago
- ☆34 · Updated 4 months ago
- Red-Teaming Language Models with DSPy ☆168 · Updated last week
- A collection of prompt injection mitigation techniques. ☆20 · Updated last year
- ☆64 · Updated last month
- ☆25 · Updated last year
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆22 · Updated 9 months ago
- CS-Eval is a comprehensive evaluation suite for fundamental cybersecurity models and the cybersecurity abilities of large language models. ☆34 · Updated 2 months ago
- This is the most comprehensive prompt hacking course available, which records our progress on a prompt engineering and prompt hacking cour… ☆43 · Updated last month
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆46 · Updated 3 months ago
- Papers about red teaming LLMs and multimodal models. ☆96 · Updated 3 months ago
- ☆28 · Updated 5 months ago
- ☆19 · Updated last year
- ☆32 · Updated 7 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆58 · Updated 10 months ago
- ☆24 · Updated 4 months ago
- A framework-less approach to robust agent development. ☆154 · Updated this week
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆262 · Updated 3 weeks ago
- Code to break Llama Guard ☆31 · Updated last year
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆450 · Updated 4 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆283 · Updated 4 months ago
- ☆83 · Updated 7 months ago