AIS2Lab / MCPSecBench
MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols
☆27 · Updated 4 months ago
Alternatives and similar repositories for MCPSecBench
Users interested in MCPSecBench are comparing it to the repositories listed below.
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆106 · Updated 2 weeks ago
- Agent Security Bench (ASB) ☆174 · Updated 3 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆116 · Updated 3 months ago
- ☆29 · Updated last year
- ☆17 · Updated 6 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs) ☆94 · Updated last year
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities ☆142 · Updated 2 weeks ago
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆381 · Updated 3 months ago
- 🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… ☆68 · Updated 5 months ago
- An autonomous LLM agent for large-scale, repository-level code auditing ☆322 · Updated last month
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆103 · Updated last year
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767 ☆163 · Updated 9 months ago
- ☆29 · Updated 10 months ago
- [USENIX Security '24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆111 · Updated last year
- ☆115 · Updated 4 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆248 · Updated last year
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆52 · Updated 7 months ago
- ☆55 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆420 · Updated last month
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆56 · Updated 10 months ago
- ☆113 · Updated last month
- ☆112 · Updated last year
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆28 · Updated last year
- VulZoo: A Comprehensive Vulnerability Intelligence Dataset | ASE 2024 Demo ☆66 · Updated 10 months ago
- ☆75 · Updated last year
- ☆25 · Updated last year
- PFI: Prompt Flow Integrity to Prevent Privilege Escalation in LLM Agents ☆26 · Updated 10 months ago
- ☆168 · Updated last month
- ☆132 · Updated 6 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆564 · Updated last year