Azure / PyRITLinks
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
☆2,901Updated this week
Alternatives and similar repositories for PyRIT
Users that are interested in PyRIT are comparing it to the libraries listed below
Sorting:
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆898Updated this week
- the LLM vulnerability scanner☆5,873Updated this week
- Protection against Model Serialization Attacks☆571Updated this week
- The Security Toolkit for LLM Interactions☆2,074Updated this week
- Set of tools to assess and improve LLM security.☆3,762Updated last week
- LLM Prompt Injection Detector☆1,352Updated last year
- Every practical and proposed defense against prompt injection.☆546Updated 6 months ago
- Test Software for the Characterization of AI Technologies☆261Updated this week
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆563Updated last month
- A curated list of large language model tools for cybersecurity research.☆474Updated last year
- a CLI that provides a generic automation layer for assessing the security of ML models☆881Updated 2 months ago
- A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities☆1,659Updated 10 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆414Updated last year
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆371Updated last month
- New ways of breaking app-integrated LLMs☆1,992Updated 2 months ago
- OWASP Foundation Web Respository☆311Updated last week
- a security scanner for custom LLM applications☆965Updated this week
- A curation of awesome tools, documents and projects about LLM Security.☆1,392Updated last month
- Microsoft Security Copilot is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders …☆571Updated 2 weeks ago
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application.☆270Updated 2 weeks ago
- AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.☆1,574Updated last month
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE…☆1,186Updated 3 months ago
- Prompt Injection Primer for Engineers☆460Updated 2 years ago
- An AI-powered threat modeling tool that leverages OpenAI's GPT models to generate threat models for a given application based on the STRI…☆840Updated last week
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai…☆758Updated 2 months ago
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆5,074Updated this week
- An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio and Power Platform☆1,052Updated last month
- This repository is dedicated to providing comprehensive mappings of the OWASP Top 10 vulnerabilities for Large Language Models (LLMs) to …☆24Updated last year
- [EMNLP'23, ACL'24] To speed up LLMs' inference and enhance LLM's perceive of key information, compress the prompt and KV-Cache, which ach…☆5,423Updated 6 months ago
- A collection of awesome resources related AI security☆311Updated this week