protectai / rebuff
LLM Prompt Injection Detector
☆1,224 · Updated 7 months ago
Alternatives and similar repositories for rebuff:
Users interested in rebuff are comparing it to the libraries listed below.
- The Security Toolkit for LLM Interactions ☆1,550 · Updated last week
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… ☆895 · Updated 4 months ago
- A tool for evaluating LLMs ☆410 · Updated 10 months ago
- New ways of breaking app-integrated LLMs ☆1,909 · Updated last year
- OWASP Foundation Web Repository ☆693 · Updated this week
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆352 · Updated last year
- Adding guardrails to large language models. ☆4,713 · Updated this week
- Make your GenAI apps safe & secure: test & harden your system prompt ☆453 · Updated 5 months ago
- The production toolkit for LLMs. Observability, prompt management and evaluations. ☆1,249 · Updated last week
- ⛓️ Serving LangChain LLM apps and agents automagically with FastApi. LLMops ☆924 · Updated 8 months ago
- Protection against Model Serialization Attacks ☆437 · Updated this week
- Every practical and proposed defense against prompt injection. ☆413 · Updated last month
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ☆4,591 · Updated this week
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… ☆2,825 · Updated 7 months ago
- Dropbox LLM Security research code and results ☆221 · Updated 10 months ago
- 🍰 PromptLayer - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions. ☆571 · Updated last week
- ☆449 · Updated last year
- ☆764 · Updated last year
- ☆869 · Updated 3 months ago
- LangSmith Client SDK Implementations ☆513 · Updated this week
- The web framework for building LLM microservices ☆988 · Updated 8 months ago
- Open-source tool to visualise your RAG 🔮 ☆1,119 · Updated 3 months ago
- ☆498 · Updated 7 months ago
- Common interface for interacting with AI agents. The protocol is tech stack agnostic - you can use it with any framework for building age… ☆1,158 · Updated 2 months ago
- Evaluation tool for LLM QA chains ☆1,073 · Updated last year
- Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Ge… ☆6,008 · Updated this week
- Promptimize is a prompt engineering evaluation and testing toolkit. ☆456 · Updated last month
- Build robust LLM applications with true composability 🔗 ☆415 · Updated last year
- Superfast AI decision making and intelligent processing of multi-modal data. ☆2,507 · Updated last week
- Python SDK for running evaluations on LLM generated responses ☆274 · Updated last week