protectai / rebuff
LLM Prompt Injection Detector
☆1,399 · Updated last year
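For context on what a prompt injection detector like rebuff does, here is a minimal, generic sketch of two defense layers such tools typically combine: heuristic pattern matching on user input, and a canary token embedded in the system prompt to reveal prompt leakage. The pattern list, helper names, and the `llm_call` placeholder are illustrative assumptions, not rebuff's actual API.

```python
import re
import secrets

# Illustrative heuristics only; real detectors (rebuff, Vigil, LLM Guard) use
# much larger pattern sets plus model-based and vector-similarity checks.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) (instructions|requests)",
    r"disregard the (system|above) prompt",
    r"you are now in developer mode",
]

def heuristic_injection_score(user_input: str) -> float:
    """Return a crude 0..1 score based on how many known-bad patterns match."""
    hits = sum(bool(re.search(p, user_input, re.IGNORECASE)) for p in SUSPICIOUS_PATTERNS)
    return min(1.0, 2 * hits / len(SUSPICIOUS_PATTERNS))

def add_canary(system_prompt: str) -> tuple[str, str]:
    """Embed a random canary token in the system prompt so leakage can be detected."""
    canary = secrets.token_hex(8)
    return f"{system_prompt}\n<!-- canary:{canary} -->", canary

def canary_leaked(llm_output: str, canary: str) -> bool:
    """If the model echoes the canary, the prompt was likely extracted or leaked."""
    return canary in llm_output

# Usage sketch (llm_call is a placeholder for your own completion call):
# prompt, canary = add_canary("You are a helpful assistant.")
# if heuristic_injection_score(user_input) > 0.5:
#     reject(user_input)
# elif canary_leaked(llm_call(prompt, user_input), canary):
#     flag_prompt_leak()
```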
Alternatives and similar repositories for rebuff
Users interested in rebuff are comparing it to the libraries listed below.
- The Security Toolkit for LLM Interactions · ☆2,444 · Updated last month
- LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). Extracts signals from prompts & responses, ensuring sa… · ☆975 · Updated last year
- New ways of breaking app-integrated LLMs · ☆2,043 · Updated 6 months ago
- Vigil: Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs · ☆438 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… · ☆449 · Updated last year
- A tool for evaluating LLMs · ☆427 · Updated last year
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt · ☆610 · Updated this week
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) · ☆1,046 · Updated 2 weeks ago
- Every practical and proposed defense against prompt injection. · ☆619 · Updated 11 months ago
- Protection against Model Serialization Attacks · ☆635 · Updated 2 months ago
- Adding guardrails to large language models. · ☆6,297 · Updated this week
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… · ☆2,998 · Updated last year
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. · ☆5,538 · Updated this week
- ☆780 · Updated 7 months ago
- PromptLayer - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions. · ☆728 · Updated this week
- Evaluation tool for LLM QA chains · ☆1,091 · Updated 2 years ago
- Visualization and debugging tool for LangChain workflows · ☆741 · Updated last year
- Scale LLM Engine public repository · ☆819 · Updated this week
- Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. · ☆868 · Updated last year
- A security scanner for custom LLM applications · ☆1,096 · Updated last month
- LLMFuzzer - Fuzzing Framework for Large Language Models. LLMFuzzer is the first open-source fuzzing framework specifically designed … · ☆336 · Updated last year
- A tiny library for coding with large language models. · ☆1,235 · Updated last year
- Get 100% uptime, reliability from OpenAI. Handle Rate Limit, Timeout, API, Keys Errors · ☆693 · Updated 2 years ago
- Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone · ☆1,027 · Updated last year
- A curation of awesome tools, documents and projects about LLM Security. · ☆1,513 · Updated 5 months ago
- An open-source visual programming environment for battle-testing prompts to LLMs. · ☆2,912 · Updated 3 weeks ago
- ☆507 · Updated last year
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. · ☆302 · Updated 2 weeks ago
- Guardrails for secure and robust agent development · ☆383 · Updated 2 weeks ago
- Fiddler Auditor is a tool to evaluate language models. · ☆188 · Updated last year