bosch-aisecurity-aishield / watchtower
AIShield Watchtower: Dive Deep into AI's Secrets! Open-source tool by AIShield for AI model insights & vulnerability scans. Secure your AI supply chain today!
⭐191 · Updated last week
Alternatives and similar repositories for watchtower:
Users interested in watchtower are comparing it to the libraries listed below.
- All things specific to LLM Red Teaming Generative AI ⭐24 · Updated 6 months ago
- Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ⭐23 · Updated 11 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ⭐314 · Updated 4 months ago
- AI-Powered CyberSecurity Compliance: Boost Network Security with OpenAI GPT-3.5-turbo ⭐11 · Updated last year
- Learn about a type of vulnerability that specifically targets machine learning models ⭐260 · Updated 10 months ago
- This collection features a diverse array of GPT-based agents, each designed for unique and specialized interactions. From creative writin… ⭐42 · Updated last year
- Top 10 for Agentic AI (AI Agent Security) - Pre-release version ⭐84 · Updated last month
- OWASP Machine Learning Security Top 10 Project ⭐83 · Updated 2 months ago
- The fastest Trust Layer for AI Agents ⭐130 · Updated last month
- Whistleblower is an offensive security tool for testing against system prompt leakage and capability discovery of an AI application expose… ⭐117 · Updated 8 months ago
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more. ⭐181 · Updated last week
- A collection of prompt injection mitigation techniques. ⭐22 · Updated last year
- Payloads for Attacking Large Language Models ⭐81 · Updated 9 months ago
- A GitHub action that leverages fabric patterns through an agent-based approach ⭐25 · Updated 3 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ⭐109 · Updated last year
- Codebase of https://arxiv.org/abs/2410.14923 ⭐46 · Updated 6 months ago
- A Fully Open Source DeepFake Detection Platform ⭐37 · Updated 8 months ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ⭐47 · Updated last year
- A collection of awesome resources related to AI security ⭐206 · Updated last week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ⭐162 · Updated last year
- AI Security EXPOSED! Live Demos Showing Hidden Risks of Agentic AI Flows: Prompt Injection, Data Poisoning. Watch the recorded… ⭐19 · Updated 9 months ago
- Threat Intelligence Analysis workflows built with n8n and integrated in SecurityOnion ⭐14 · Updated 8 months ago
- An LLM explicitly designed for getting hacked ⭐147 · Updated last year
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ⭐39 · Updated last year
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ⭐45 · Updated 11 months ago
- ⭐43 · Updated 5 months ago
- [SPOILER ALERT] Solutions to Gandalf, the prompt hacking/red teaming game from Lakera AI ⭐18 · Updated last year
- A curated list of awesome LLM Red Teaming training, resources, and tools. ⭐16 · Updated last month
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ⭐379 · Updated last year
- Explore AI Supply Chain Risk with the AI Risk Database ⭐53 · Updated 11 months ago