The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
☆3,556Mar 16, 2026Updated this week
Alternatives and similar repositories for PyRIT
Users that are interested in PyRIT are comparing it to the libraries listed below
Sorting:
- the LLM vulnerability scanner☆7,251Mar 12, 2026Updated last week
- a CLI that provides a generic automation layer for assessing the security of ML models☆914Jul 18, 2025Updated 8 months ago
- Set of tools to assess and improve LLM security.☆4,077Updated this week
- Integrate PyRIT in existing tools☆59Feb 23, 2026Updated 3 weeks ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆642Feb 16, 2026Updated last month
- AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.☆1,870Feb 13, 2026Updated last month
- 🐢 Open-Source Evaluation & Testing library for LLM Agents☆5,159Mar 13, 2026Updated last week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆465Jan 31, 2024Updated 2 years ago
- The Security Toolkit for LLM Interactions☆2,699Dec 15, 2025Updated 3 months ago
- An offensive/defense security toolset for discovery, recon and ethical assessment of AI Agents☆1,133Dec 21, 2025Updated 2 months ago
- A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities☆1,699Oct 23, 2024Updated last year
- Protection against Model Serialization Attacks☆657Feb 18, 2026Updated last month
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆1,152Feb 22, 2026Updated 3 weeks ago
- LLM Prompt Injection Detector☆1,445Aug 7, 2024Updated last year
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆5,819Updated this week
- Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and…☆5,893Dec 12, 2025Updated 3 months ago
- Universal and Transferable Attacks on Aligned Language Models☆4,568Aug 2, 2024Updated last year
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪☆1,808Feb 3, 2026Updated last month
- A security scanner for your LLM agentic workflows☆929Nov 27, 2025Updated 3 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation☆118Feb 7, 2024Updated 2 years ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆465Feb 26, 2024Updated 2 years ago
- Automated Penetration Testing Agentic Framework Powered by Large Language Models☆12,102Feb 23, 2026Updated 3 weeks ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)☆159Dec 18, 2024Updated last year
- Test your prompts, agents, and RAGs. Red teaming/pentesting/vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Ll…☆17,709Updated this week
- A collection of Azure AD/Entra tools for offensive and defensive security purposes☆2,542Feb 5, 2026Updated last month
- Zero shot vulnerability discovery using LLMs☆2,586Feb 6, 2025Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal☆879Aug 16, 2024Updated last year
- New ways of breaking app-integrated LLMs☆2,063Jul 17, 2025Updated 8 months ago
- Automating situational awareness for cloud penetration tests.☆2,309Mar 10, 2026Updated last week
- Test Software for the Characterization of AI Technologies☆283Mar 13, 2026Updated last week
- Every practical and proposed defense against prompt injection.☆659Feb 22, 2025Updated last year
- A research project to add some brrrrrr to Burp☆208Feb 16, 2026Updated last month
- Azure Red Team tool for graphing Azure and Azure Active Directory objects☆1,689Jan 8, 2024Updated 2 years ago
- a security scanner for custom LLM applications☆1,149Dec 1, 2025Updated 3 months ago
- Granular, Actionable Adversary Emulation for the Cloud☆2,277Mar 12, 2026Updated last week
- Small and highly portable detection tests based on MITRE's ATT&CK.☆11,688Mar 13, 2026Updated last week
- An easy-to-use Python framework to generate adversarial jailbreak prompts.☆826Mar 27, 2025Updated 11 months ago
- FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is de…☆818Mar 6, 2026Updated 2 weeks ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆347Feb 12, 2024Updated 2 years ago