emmanuelgjr / owaspllmtop10mapping
This repository is dedicated to providing comprehensive mappings of the OWASP Top 10 vulnerabilities for Large Language Models (LLMs) to a variety of industry standards and cybersecurity frameworks.
★16 · Updated last year
Alternatives and similar repositories for owaspllmtop10mapping
Users interested in owaspllmtop10mapping are comparing it to the libraries listed below.
- Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ★23 · Updated last year
- OWASP Machine Learning Security Top 10 Project ★85 · Updated 4 months ago
- Secure Jupyter Notebooks and Experimentation Environment ★75 · Updated 4 months ago
- ★48 · Updated last week
- All things specific to LLM Red Teaming Generative AI ★25 · Updated 7 months ago
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ★56 · Updated last year
- Curated list of open-source projects focused on LLM security ★43 · Updated 7 months ago
- Data Scientists Go To Jupyter ★64 · Updated 3 months ago
- ATLAS tactics, techniques, and case studies data ★73 · Updated last month
- ★36 · Updated 5 months ago
- LLM Testing Findings Templates ★72 · Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ★163 · Updated last year
- InfoSec OpenAI Examples ★19 · Updated last year
- A compilation of Software Supply Chain Security resources including initiatives, standards, regulations, organizations, vendors, tooling, … ★135 · Updated last year
- Payloads for Attacking Large Language Models ★89 · Updated 10 months ago
- Using ML models for red teaming ★43 · Updated last year
- An extension for Burp Suite that allows researchers to use GPT for analysis of HTTP requests and responses ★110 · Updated 2 years ago
- A fun POC built to understand AI security agents. ★30 · Updated 5 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ★30 · Updated last year
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0. ★63 · Updated 11 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ★113 · Updated 5 months ago
- An LLM explicitly designed for getting hacked ★149 · Updated last year
- Source code for the offsecml framework ★40 · Updated last year
- A Risk-Based Prioritization Taxonomy for prioritizing CVEs (Common Vulnerabilities and Exposures) ★75 · Updated last year
- CALDERA plugin for adversary emulation of AI-enabled systems ★96 · Updated last year
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ★80 · Updated 3 weeks ago
- GCP GOAT is a vulnerable application for learning GCP security ★64 · Updated 2 weeks ago
- Tree of Attacks (TAP) Jailbreaking Implementation ★109 · Updated last year
- ★104 · Updated last year
- ★77 · Updated 3 weeks ago