bosch-aisecurity-aishield / watchtower
AIShield Watchtower: Dive Deep into AI's Secrets! Open-source tool by AIShield for AI model insights & vulnerability scans. Secure your AI supply chain today!
★196 · Updated last week
Related projects
Alternatives and complementary repositories for watchtower
- All things specific to LLM Red Teaming Generative AI · ★14 · Updated 3 weeks ago
- Payloads for Attacking Large Language Models · ★63 · Updated 4 months ago
- OWASP Machine Learning Security Top 10 Project · ★76 · Updated 2 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. · ★240 · Updated last month
- Whistleblower is a tool for leaking system prompts and capability discovery of any API accessible LLM App. Built for developers, security… · ★111 · Updated 3 months ago
- Framework for LLM evaluation, guardrails and security · ★96 · Updated 2 months ago
- Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. · ★15 · Updated 5 months ago
- CTF challenges designed and implemented in machine learning applications · ★111 · Updated 2 months ago
- Secure Jupyter Notebooks and Experimentation Environment · ★55 · Updated 3 weeks ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). · ★107 · Updated 8 months ago
- Implementation of BEAST adversarial attack for language models (ICML 2024) · ★72 · Updated 5 months ago
- A benchmark for prompt injection detection systems. · ★86 · Updated 2 months ago
- ★33 · Updated 3 months ago
- Dropbox LLM Security research code and results · ★216 · Updated 5 months ago
- Explore AI Supply Chain Risk with the AI Risk Database · ★50 · Updated 6 months ago
- Learn about a type of vulnerability that specifically targets machine learning models · ★184 · Updated 4 months ago
- LLMFuzzer - Fuzzing Framework for Large Language Models. LLMFuzzer is the first open-source fuzzing framework specifically designed… · ★231 · Updated 9 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities · ★25 · Updated 5 months ago
- ★20 · Updated last month
- ★22 · Updated 9 months ago
- A collection of awesome resources related to AI security · ★124 · Updated 7 months ago
- LLM-powered agents for scanning vulnerabilities on any website - Llama 3 8B, Groq, Selenium, CrewAI, Exa AI · ★13 · Updated 3 months ago
- This repository provides an implementation to formalize and benchmark Prompt Injection attacks and defenses · ★142 · Updated 2 months ago
- A dataset intended to train an LLM for completely CVE-focused input and output. · ★44 · Updated this week
- Adversarial Machine Learning (AML) Capture the Flag (CTF) · ★94 · Updated 7 months ago
- ATLAS tactics, techniques, and case studies data · ★49 · Updated last month
- Protection against Model Serialization Attacks · ★314 · Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. · ★149 · Updated last year
- ★93 · Updated last month
- Project LLM Verification Standard · ★36 · Updated 7 months ago