Azure / PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
☆2,514 · Updated last week
Alternatives and similar repositories for PyRIT
Users interested in PyRIT are comparing it to the libraries listed below.
- OWASP Foundation Web Repository ☆751 · Updated this week
- The LLM vulnerability scanner ☆4,485 · Updated last week
- The Security Toolkit for LLM Interactions ☆1,716 · Updated last week
- Set of tools to assess and improve LLM security. ☆3,407 · Updated this week
- Microsoft Security Copilot is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders … ☆533 · Updated 2 weeks ago
- Protection against Model Serialization Attacks ☆492 · Updated last week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆386 · Updated last year
- A curated list of large language model tools for cybersecurity research. ☆458 · Updated last year
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE… ☆1,156 · Updated this week
- A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities ☆1,601 · Updated 7 months ago
- LLM Prompt Injection Detector ☆1,283 · Updated 9 months ago
- OWASP Foundation Web Repository ☆262 · Updated 2 weeks ago
- An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio, and Power Platform ☆950 · Updated 2 months ago
- Every practical and proposed defense against prompt injection. ☆463 · Updated 3 months ago
- Make your GenAI apps safe & secure: test & harden your system prompt ☆483 · Updated 7 months ago
- New ways of breaking app-integrated LLMs ☆1,930 · Updated last year
- Test Software for the Characterization of AI Technologies ☆250 · Updated last week
- LLM-powered fuzzing via OSS-Fuzz ☆1,209 · Updated this week
- A CLI that provides a generic automation layer for assessing the security of ML models ☆860 · Updated last year
- An AI-powered threat modeling tool that leverages OpenAI's GPT models to generate threat models for a given application based on the STRI… ☆733 · Updated this week
- AI Red Teaming playground labs for running AI Red Teaming trainings, including infrastructure. ☆806 · Updated 2 weeks ago
- A unified evaluation framework for large language models ☆2,616 · Updated last month
- Microsoft Threat Intelligence Security Tools ☆1,865 · Updated last week
- A repo to conduct vulnerability enrichment. ☆636 · Updated this week
- Zero-shot vulnerability discovery using LLMs ☆1,794 · Updated 3 months ago
- Navigate the CVE jungle with ease. ☆2,005 · Updated last month
- Helping ethical hackers use LLMs in 50 lines of code or less. ☆592 · Updated last week
- [EMNLP'23, ACL'24] To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,111 · Updated 2 months ago
- A framework for prompt tuning using Intent-based Prompt Calibration ☆2,530 · Updated last month
- All things prompt engineering ☆5,616 · Updated 11 months ago