invariantlabs-ai / mcp-injection-experimentsLinks
Code snippets to reproduce MCP tool poisoning attacks.
β181Updated 5 months ago
Alternatives and similar repositories for mcp-injection-experiments
Users that are interested in mcp-injection-experiments are comparing it to the libraries listed below
Sorting:
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767β134Updated 5 months ago
- π₯π Awesome MCP (Model Context Protocol) Security π₯οΈβ564Updated 3 weeks ago
- β68Updated 2 months ago
- We present MAPTA, a multi-agent system for autonomous web application security assessment that combines large language model orchestratioβ¦β65Updated last month
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)β140Updated 9 months ago
- LLM | Security | Operations in one github repo with good links and pictures.β57Updated 9 months ago
- XBOW Validation Benchmarksβ246Updated 3 months ago
- CVE-Bench: A Benchmark for AI Agentsβ Ability to Exploit Real-World Web Application Vulnerabilitiesβ102Updated last month
- Top 10 for Agentic AI (AI Agent Security) serves as the core for OWASP and CSA Red teaming workβ139Updated 3 months ago
- Constrain, log and scan your MCP connections for security vulnerabilities.β1,128Updated this week
- β153Updated 3 months ago
- Code for the paper "Defeating Prompt Injections by Design"β125Updated 3 months ago
- MCP Server for Burpβ297Updated 3 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systemsβ92Updated 8 months ago
- A Model Context Protocol (MCP) server for querying the CVE-Search APIβ56Updated 2 months ago
- A collection of prompt injection mitigation techniques.β24Updated 2 years ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to poteβ¦β186Updated this week
- β76Updated this week
- LLM proxy to observe and debug what your AI agents are doing.β49Updated 2 months ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents onβ¦β70Updated 2 weeks ago
- Automated web vulnerability scanning with LLM agentsβ355Updated 3 months ago
- MCP for Security: A collection of Model Context Protocol servers for popular security tools like SQLMap, FFUF, NMAP, Masscan and more. Inβ¦β427Updated 3 weeks ago
- Payloads for Attacking Large Language Modelsβ101Updated 4 months ago
- β31Updated 2 months ago
- https://arxiv.org/abs/2412.02776β62Updated 10 months ago
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more.β243Updated 5 months ago
- β370Updated 2 weeks ago
- Repo with random useful scripts, utilities, prompts and stuffβ171Updated 2 months ago
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β316Updated last year
- Guardrails for secure and robust agent developmentβ348Updated 2 months ago