AIM-Intelligence / awesome-mcp-security
Security threats related to MCP (Model Context Protocol), MCP servers, and more
☆42 · Updated 8 months ago
Alternatives and similar repositories for awesome-mcp-security
Users interested in awesome-mcp-security are comparing it to the libraries listed below.
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆384 · Updated 3 weeks ago
- ☆182 · Updated 2 weeks ago
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆55 · Updated 2 years ago
- MCP-based Agent Deep Evaluation System ☆139 · Updated 3 months ago
- Red-Teaming Language Models with DSPy ☆248 · Updated 10 months ago
- DeepTeam is a framework to red team LLMs and LLM systems. ☆1,196 · Updated this week
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆78 · Updated 3 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- The fastest Trust Layer for AI Agents ☆144 · Updated 6 months ago
- [Corca / ML] Automatically solving Gandalf AI with an LLM ☆52 · Updated 2 years ago
- ☆49 · Updated last year
- ☆34 · Updated last year
- Guardrails for secure and robust agent development ☆374 · Updated 5 months ago
- LLM security and privacy ☆52 · Updated last year
- autoredteam: code for training models that automatically red team other language models ☆15 · Updated 2 years ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated 2 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆123 · Updated 2 months ago
- A security scanner for your LLM agentic workflows ☆848 · Updated last month
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆365 · Updated last month
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆137 · Updated this week
- A benchmark for prompt injection detection systems. ☆153 · Updated last week
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆100 · Updated last year
- Code for the paper "Defeating Prompt Injections by Design" ☆186 · Updated 6 months ago
- Every practical and proposed defense against prompt injection. ☆597 · Updated 10 months ago
- Make your GenAI apps safe & secure. Test & harden your system prompt. ☆600 · Updated 3 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆368 · Updated 11 months ago
- Papers about red teaming LLMs and Multimodal models. ☆158 · Updated 6 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆176 · Updated 8 months ago
- ☆26 · Updated last year
- ☆79 · Updated 2 months ago