Azure / PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
☆2,462Updated this week
Alternatives and similar repositories for PyRIT:
Users that are interested in PyRIT are comparing it to the libraries listed below
- the LLM vulnerability scanner☆4,384Updated this week
- Set of tools to assess and improve LLM security.☆3,250Updated this week
- The Security Toolkit for LLM Interactions☆1,658Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆380Updated last year
- Protection against Model Serialization Attacks☆478Updated this week
- Microsoft Security Copilot is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders …☆530Updated 3 weeks ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆470Updated 6 months ago
- Test Software for the Characterization of AI Technologies☆247Updated this week
- OWASP Foundation Web Respository☆719Updated last week
- Every practical and proposed defense against prompt injection.☆453Updated 2 months ago
- LLM Prompt Injection Detector☆1,269Updated 9 months ago
- a CLI that provides a generic automation layer for assessing the security of ML models☆858Updated last year
- An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio and Power Platform☆943Updated last month
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less..☆572Updated last week
- A curated list of large language model tools for cybersecurity research.☆453Updated last year
- A unified evaluation framework for large language models☆2,606Updated last week
- ☆1,544Updated last year
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE…☆1,150Updated 3 weeks ago
- A curation of awesome tools, documents and projects about LLM Security.☆1,208Updated 3 weeks ago
- New ways of breaking app-integrated LLMs☆1,929Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆365Updated last year
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆316Updated 4 months ago
- A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities☆1,593Updated 6 months ago
- A project structure aware autonomous software engineer aiming for autonomous program improvement. Resolved 37.3% tasks (pass@1) in SWE-be…☆2,928Updated 2 weeks ago
- Cohere Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications.☆3,042Updated last week
- A framework for prompt tuning using Intent-based Prompt Calibration☆2,499Updated last month
- A simple, performant and scalable Jax LLM!☆1,711Updated this week
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa…☆907Updated 5 months ago
- A curated list of Large Language Model (LLM) Interpretability resources.☆1,321Updated 4 months ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆593Updated 3 months ago