protectai / llm-guard
The Security Toolkit for LLM Interactions
☆1,349Updated this week
Alternatives and similar repositories for llm-guard:
Users that are interested in llm-guard are comparing it to the libraries listed below
- LLM Prompt Injection Detector☆1,159Updated 5 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆338Updated 11 months ago
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa…☆866Updated last month
- OWASP Foundation Web Respository☆621Updated this week
- Superfast AI decision making and intelligent processing of multi-modal data.☆2,294Updated this week
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆429Updated 3 months ago
- Protection against Model Serialization Attacks☆355Updated this week
- The production toolkit for LLMs. Observability, prompt management and evaluations.☆1,136Updated this week
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng…☆2,075Updated this week
- ☆815Updated 2 months ago
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality