Azure / PyRITLinks
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
☆3,123Updated this week
Alternatives and similar repositories for PyRIT
Users that are interested in PyRIT are comparing it to the libraries listed below
Sorting:
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆952Updated this week
- the LLM vulnerability scanner☆6,397Updated this week
- Set of tools to assess and improve LLM security.☆3,871Updated last week
- The Security Toolkit for LLM Interactions☆2,261Updated 2 weeks ago
- Protection against Model Serialization Attacks☆601Updated last month
- Every practical and proposed defense against prompt injection.☆579Updated 9 months ago
- LLM Prompt Injection Detector☆1,375Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆430Updated last year
- a CLI that provides a generic automation layer for assessing the security of ML models☆892Updated 4 months ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆587Updated last month
- Microsoft Security Copilot is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders …☆584Updated 2 months ago
- New ways of breaking app-integrated LLMs☆2,007Updated 4 months ago
- A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities☆1,675Updated last year
- A curated list of large language model tools for cybersecurity research.☆478Updated last year
- An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio and Power Platform☆1,074Updated 2 weeks ago
- AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.☆1,727Updated last month
- Test Software for the Characterization of AI Technologies☆264Updated this week
- A unified evaluation framework for large language models☆2,743Updated last month
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE…☆1,192Updated last week
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less..☆865Updated 2 months ago
- OWASP Foundation Web Respository☆330Updated this week
- a security scanner for custom LLM applications☆1,031Updated last month
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai…☆867Updated 4 months ago
- A curation of awesome tools, documents and projects about LLM Security.☆1,450Updated 3 months ago
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪☆1,686Updated last week
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal☆782Updated last year
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa…☆959Updated 11 months ago
- Zero shot vulnerability discovery using LLMs☆2,389Updated 9 months ago
- An AI-powered threat modeling tool that leverages OpenAI's GPT models to generate threat models for a given application based on the STRI…☆871Updated this week
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆632Updated 3 months ago