zmre / awesome-security-for-ai
Awesome products for securing AI systems, including open source and commercial options, with an infographic licensed CC-BY-SA-4.0.
☆76 · Updated last year
Alternatives and similar repositories for awesome-security-for-ai
Users interested in awesome-security-for-ai are comparing it to the repositories listed below.
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Updated 2 years ago
- Source code for the offsecml framework ☆43 · Updated last year
- An LLM explicitly designed for getting hacked ☆163 · Updated 2 years ago
- LLM Testing Findings Templates ☆75 · Updated last year
- Reference notes for the Attacking and Defending Generative AI presentation ☆67 · Updated last year
- Code repository for AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆90 · Updated this week
- Payloads for Attacking Large Language Models ☆109 · Updated 6 months ago
- All things specific to LLM red teaming of generative AI ☆29 · Updated last year
- Dropbox LLM Security research code and results ☆248 · Updated last year
- AIGoat: a deliberately vulnerable AI infrastructure. Learn AI security through solving our challenges. ☆260 · Updated 2 months ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆67 · Updated 11 months ago
- 🤖 A GitHub Action that leverages fabric patterns through an agent-based approach ☆33 · Updated 11 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆404 · Updated 4 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆175 · Updated 2 years ago
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆98 · Updated last month
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆92 · Updated 6 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆150 · Updated 11 months ago
- Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external to… ☆32 · Updated last year
- NOVA: The Prompt Pattern Matching ☆56 · Updated last month
- ATHI — An AI Threat Modeling Framework for Policymakers ☆58 · Updated 2 years ago
- AI Security Shared Responsibility Model ☆85 · Updated 2 months ago
- OWASP Machine Learning Security Top 10 Project ☆94 · Updated this week