NetsecExplained / Attacking-and-Defending-Generative-AI
Reference notes for the Attacking and Defending Generative AI presentation
☆69 · Updated last year
Alternatives and similar repositories for Attacking-and-Defending-Generative-AI
Users interested in Attacking-and-Defending-Generative-AI are comparing it to the repositories listed below.
- LLM Testing Findings Templates ☆75 · Updated last year
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆92 · Updated last week
- ☆137 · Updated this week
- ☆44 · Updated last year
- A fun POC built to help understand AI security agents. ☆34 · Updated 3 months ago
- Source code for the offsecml framework ☆44 · Updated last year
- An experimental project using LLM technology to generate security documentation for Open Source Software (OSS) projects ☆34 · Updated 11 months ago
- Vulnerability impact analyzer that reduces false positives in SCA tools by performing intelligent code analysis. Uses agentic AI with ope… ☆62 · Updated 11 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆155 · Updated last year
- ☆38 · Updated last year
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆102 · Updated 3 months ago
- ☆82 · Updated last month
- Integrate PyRIT in existing tools ☆46 · Updated 11 months ago
- A collection of deliberately vulnerable servers for learning to pentest MCP servers ☆217 · Updated last month
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆83 · Updated 9 months ago
- Repository for CoSAI Workstream 4, Secure Design Patterns for Agentic Systems ☆82 · Updated 2 weeks ago
- CALDERA plugin for adversary emulation of AI-enabled systems ☆109 · Updated 2 years ago
- Damn Vulnerable Browser Extension (DVBE), previously named Badly Coded Browser Extension (BCBE), is an open-source vulnerable Chrome E… ☆31 · Updated 11 months ago
- GCP GOAT is a vulnerable application for learning GCP security ☆70 · Updated 8 months ago
- AI Security Shared Responsibility Model ☆88 · Updated 4 months ago
- The notebook for my talk - ChatGPT: Your Red Teaming Ally ☆53 · Updated 2 years ago
- Application that investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external to… ☆32 · Updated last year
- AI-featured threat modeling and security review project ☆17 · Updated last year
- A research project to add some brrrrrr to Burp ☆197 · Updated 11 months ago
- An example vulnerable app that integrates an LLM ☆26 · Updated last year
- NOVA: The Prompt Pattern Matching ☆88 · Updated last week
- RansomWhen is a tool to enumerate identities that can lock S3 buckets using KMS, resulting in ransomware, as well as detect occurrences o… ☆60 · Updated 11 months ago
- Verizon Burp Extensions: AI Suite ☆142 · Updated 9 months ago
- One Conference 2024 ☆111 · Updated last year
- An LLM explicitly designed for getting hacked ☆167 · Updated 2 years ago