Azure / PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open-source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
☆2,325 · Updated this week
Alternatives and similar repositories for PyRIT:
Users interested in PyRIT are comparing it to the libraries listed below.
- the LLM vulnerability scanner · ☆4,150 · Updated this week
- OWASP Foundation Web Repository · ☆686 · Updated this week
- Microsoft Security Copilot is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders… · ☆512 · Updated this week
- Set of tools to assess and improve LLM security. · ☆2,983 · Updated last month
- A curated list of large language model tools for cybersecurity research. · ☆436 · Updated 11 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs · ☆365 · Updated last year
- A unified evaluation framework for large language models · ☆2,574 · Updated last month
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE… · ☆1,095 · Updated last month
- A curation of awesome tools, documents and projects about LLM Security. · ☆1,144 · Updated this week
- Every practical and proposed defense against prompt injection. · ☆405 · Updated last month
- A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities · ☆1,554 · Updated 5 months ago
- Test Software for the Characterization of AI Technologies · ☆242 · Updated this week
- A curated list of awesome security tools, experimental cases, and other interesting things with LLM or GPT. · ☆583 · Updated 2 months ago
- The Security Toolkit for LLM Interactions · ☆1,533 · Updated last week
- An overview of LLMs for cybersecurity. · ☆767 · Updated last week
- SWE-bench [Multimodal]: Can Language Models Resolve Real-World GitHub Issues? · ☆2,680 · Updated this week
- Zero-shot vulnerability discovery using LLMs · ☆1,581 · Updated last month
- Llama-3 agents that can browse the web by following instructions and talking to you · ☆1,393 · Updated 3 months ago
- Helping ethical hackers use LLMs in 50 lines of code or less. · ☆540 · Updated this week
- Dropbox LLM Security research code and results · ☆221 · Updated 10 months ago
- A CLI that provides a generic automation layer for assessing the security of ML models · ☆849 · Updated last year
- Protection against Model Serialization Attacks · ☆437 · Updated this week
- [ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct · ☆2,003 · Updated 4 months ago
- An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio and Power Platform · ☆925 · Updated last week
- New ways of breaking app-integrated LLMs · ☆1,906 · Updated last year
- Make your GenAI apps safe & secure: test & harden your system prompt · ☆449 · Updated 5 months ago
- Navigate the CVE jungle with ease. · ☆1,916 · Updated 2 weeks ago
- Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture. · ☆2,939 · Updated last week
- A curated list of GPT agents for cybersecurity · ☆5,931 · Updated 8 months ago