NetsecExplained / Attacking-and-Defending-Generative-AI
Reference notes for Attacking and Defending Generative AI presentation
☆67 (updated last year)
Alternatives and similar repositories for Attacking-and-Defending-Generative-AI
Users interested in Attacking-and-Defending-Generative-AI are comparing it to the repositories listed below.
- LLM Testing Findings Templates (☆74, updated last year)
- Source code for the offsecml framework (☆42, updated last year)
- ☆91 (updated last week)
- ☆43 (updated 10 months ago)
- Code repository for AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models (☆83, updated last week)
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) (☆142, updated 10 months ago)
- An experimental project using LLM technology to generate security documentation for Open Source Software (OSS) projects (☆34, updated 8 months ago)
- A fun POC built to understand AI security agents (☆33, updated 10 months ago)
- ☆38 (updated 9 months ago)
- Vulnerability impact analyzer that reduces false positives in SCA tools by performing intelligent code analysis. Uses agentic AI with ope… (☆61, updated 8 months ago)
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects (☆94, updated 2 weeks ago)
- ☆59 (updated this week)
- ☆320 (updated last month)
- Personal Access Token (PAT) recon tool for bug bounty hunters, pentesters, and red teams (☆33, updated 3 months ago)
- A research project to add some brrrrrr to Burp (☆194, updated 8 months ago)
- Curated resources, research, and tools for securing AI systems (☆156, updated this week)
- Verizon Burp Extensions: AI Suite (☆141, updated 6 months ago)
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0 (☆73, updated last year)
- NOVA: The Prompt Pattern Matching (☆25, updated last week)
- AI Security Shared Responsibility Model (☆79, updated last month)
- A YAML-based format for describing tools to LLMs, like man pages but for robots! (☆78, updated 5 months ago)
- Payloads for AI Red Teaming and beyond (☆296, updated 2 months ago)
- Delving into the realm of LLM security: an exploration of offensive and defensive tools, unveiling their present capabilities (☆166, updated 2 years ago)
- ☆62 (updated 4 months ago)
- An LLM explicitly designed for getting hacked (☆162, updated 2 years ago)
- CALDERA plugin for adversary emulation of AI-enabled systems (☆103, updated 2 years ago)
- AI agent for autonomous cyber operations (☆319, updated this week)
- Application that investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external to… (☆32, updated last year)
- ☆17 (updated 6 months ago)
- AIGoat: a deliberately vulnerable AI infrastructure. Learn AI security through solving our challenges. (☆253, updated last month)