Azure / PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
☆2,742 · Updated last week
Alternatives and similar repositories for PyRIT
Users interested in PyRIT are comparing it to the libraries listed below.
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) ☆839 · Updated last week
- Set of tools to assess and improve LLM security. ☆3,659 · Updated this week
- The Security Toolkit for LLM Interactions ☆1,936 · Updated last week
- The LLM vulnerability scanner ☆4,896 · Updated last week
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt. ☆530 · Updated last week
- Protection against Model Serialization Attacks ☆540 · Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆402 · Updated last year
- LLM Prompt Injection Detector ☆1,326 · Updated last year
- A curated list of large language model tools for cybersecurity research. ☆468 · Updated last year
- Every practical and proposed defense against prompt injection. ☆511 · Updated 5 months ago
- Microsoft Security Copilot is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders … ☆555 · Updated 2 months ago
- Test Software for the Characterization of AI Technologies ☆260 · Updated this week
- An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio and Power Platform ☆974 · Updated this week
- New ways of breaking app-integrated LLMs ☆1,969 · Updated 3 weeks ago
- A CLI that provides a generic automation layer for assessing the security of ML models ☆874 · Updated 3 weeks ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆347 · Updated last week
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE… ☆1,174 · Updated 2 months ago
- A unified evaluation framework for large language models ☆2,679 · Updated this week
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less. ☆735 · Updated last month
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆262 · Updated this week
- An overview of LLMs for cybersecurity. ☆1,000 · Updated 3 months ago
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… ☆672 · Updated 3 weeks ago
- A curation of awesome tools, documents and projects about LLM Security. ☆1,316 · Updated 3 months ago
- A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities ☆1,644 · Updated 9 months ago
- OWASP Foundation Web Repository ☆289 · Updated last week
- This repository is dedicated to providing comprehensive mappings of the OWASP Top 10 vulnerabilities for Large Language Models (LLMs) to … ☆23 · Updated last year
- A collection of awesome resources related to AI security ☆278 · Updated last week
- A benchmark for prompt injection detection systems. ☆124 · Updated 3 weeks ago
- AI Red Teaming playground labs for running AI Red Teaming trainings, including infrastructure. ☆1,480 · Updated 2 weeks ago
- Zero-shot vulnerability discovery using LLMs ☆2,160 · Updated 6 months ago