pasquini-dario / LLMmap
☆51 · Updated 2 weeks ago
Alternatives and similar repositories for LLMmap
Users interested in LLMmap are comparing it to the repositories listed below.
- ☆82 · Updated 8 months ago
- ☆65 · Updated 6 months ago
- Implementation of the BEAST adversarial attack for language models (ICML 2024) · ☆90 · Updated last year
- General research for Dreadnode · ☆23 · Updated last year
- Using ML models for red teaming · ☆43 · Updated last year
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities · ☆73 · Updated 2 weeks ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. · ☆35 · Updated 7 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation · ☆114 · Updated last year
- ☆70 · Updated last year
- Code snippets to reproduce MCP tool poisoning attacks. · ☆164 · Updated 3 months ago
- VulZoo: A Comprehensive Vulnerability Intelligence Dataset | ASE 2024 Demo · ☆57 · Updated 4 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) · ☆123 · Updated 7 months ago
- All things specific to LLM Red Teaming Generative AI · ☆28 · Updated 9 months ago
- https://arxiv.org/abs/2412.02776 · ☆59 · Updated 8 months ago
- A collection of prompt injection mitigation techniques. · ☆23 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). · ☆113 · Updated last year
- CyberBench: A Multi-Task Cyber LLM Benchmark · ☆17 · Updated 3 months ago
- Source code of "TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification", ACL 2024 (Findings) · ☆12 · Updated 8 months ago
- Autonomous Assumed-Breach Penetration Testing of Active Directory Networks · ☆20 · Updated last month
- Future-proof vulnerability detection benchmark, based on CVEs in open-source repos · ☆59 · Updated this week
- An Execution Isolation Architecture for LLM-Based Agentic Systems · ☆86 · Updated 6 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … · ☆303 · Updated last year
- The D-CIPHER and NYU CTF baseline LLM agents built for NYU CTF Bench · ☆89 · Updated last week
- Risks and targets for assessing LLMs & LLM vulnerabilities · ☆32 · Updated last year
- ☆28 · Updated this week
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… · ☆49 · Updated last week
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… · ☆42 · Updated 5 months ago
- A benchmark for prompt injection detection systems. · ☆124 · Updated 3 weeks ago
- The automated prompt injection framework for LLM-integrated applications. · ☆220 · Updated 10 months ago
- Testability Pattern Catalogs for SAST · ☆31 · Updated 5 months ago