kenhuangus / Top-Threats-for-AI-Agents
☆28 · Updated this week
Alternatives and similar repositories for Top-Threats-for-AI-Agents:
Users interested in Top-Threats-for-AI-Agents are comparing it to the repositories listed below
- ☆36 · Updated 2 months ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆42 · Updated last year
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆45 · Updated this week
- OWASP Top 10 for Agentic AI (AI Agent Security) - Pre-release version ☆55 · Updated last week
- Agentic Workflows Made Simple ☆105 · Updated last week
- ATLAS tactics, techniques, and case studies data ☆57 · Updated 5 months ago
- ☆37 · Updated 2 months ago
- StartLeft is an automation tool for generating Threat Models written in the Open Threat Model (OTM) format from a variety of different so… ☆49 · Updated this week
- ☆102 · Updated 9 months ago
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ☆43 · Updated 9 months ago
- Project LLM Verification Standard ☆40 · Updated 10 months ago
- Save toil in security operations with: Detection & Intelligence Analysis for New Alerts (D.I.A.N.A.) ☆171 · Updated 5 months ago
- Generative AI Governance for Enterprises ☆14 · Updated 2 months ago
- An AI-powered tool for discovering privilege escalation opportunities in AWS IAM configurations. ☆106 · Updated 5 months ago
- ☆221 · Updated last month
- Test Software for the Characterization of AI Technologies ☆240 · Updated this week
- One Conference 2024 ☆106 · Updated 5 months ago
- ☆64 · Updated 3 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆88 · Updated 2 months ago
- AI-powered tool designed to help produce Threat Intelligence Mindmaps. ☆85 · Updated last month
- CALDERA plugin for adversary emulation of AI-enabled systems ☆93 · Updated last year
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach ☆16 · Updated last month
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆64 · Updated 2 months ago
- ☆32 · Updated 3 months ago
- Curated list of open-source projects focused on LLM security ☆33 · Updated 3 months ago
- OWASP Foundation Web Repository ☆237 · Updated this week
- AI/ML applications have unique security threats. Project GuardRail is a set of security and privacy requirements that AI/ML applications … ☆26 · Updated 2 months ago
- Source code for the offsecml framework ☆37 · Updated 8 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆352 · Updated last year
- Rapidly identify and mitigate container security vulnerabilities with generative AI. ☆87 · Updated this week