pasquini-dario / LLMmap
☆34 · Updated 7 months ago
Alternatives and similar repositories for LLMmap
Users interested in LLMmap are comparing it to the repositories listed below
- General research for Dreadnode ☆21 · Updated 10 months ago
- ☆62 · Updated 10 months ago
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆86 · Updated 11 months ago
- Source code of "TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification", ACL 2024 (Findings) ☆11 · Updated 5 months ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆29 · Updated 4 months ago
- The official implementation of our preprint paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆46 · Updated 6 months ago
- Using ML models for red teaming ☆43 · Updated last year
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆41 · Updated 2 months ago
- ☆61 · Updated 5 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆107 · Updated last year
- ☆55 · Updated 4 months ago
- Agent Security Bench (ASB) ☆79 · Updated last week
- A collection of prompt injection mitigation techniques. ☆22 · Updated last year
- ☆44 · Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆79 · Updated 3 months ago
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities ☆42 · Updated 2 weeks ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆109 · Updated 4 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆154 · Updated last month
- Papers about red teaming LLMs and multimodal models. ☆115 · Updated 5 months ago
- Adversarial Tokenization ☆22 · Updated last week
- Fine-tuning base models to build robust task-specific models ☆29 · Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆50 · Updated 8 months ago
- The most comprehensive prompt hacking course available, which records our progress on a prompt engineering and prompt hacking cour… ☆69 · Updated last month
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆66 · Updated last year
- [USENIX Security '24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆92 · Updated 7 months ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆45 · Updated last month
- ☆16 · Updated 5 months ago
- ☆49 · Updated this week
- This repository provides a benchmark for prompt injection attacks and defenses ☆195 · Updated last week
- BurpSuite MCP Server: A powerful Model Context Protocol (MCP) server implementation for BurpSuite, providing programmatic access to Burp… ☆27 · Updated 2 weeks ago