LLM Prompt Injection Detector
☆1,445Aug 7, 2024Updated last year
Alternatives and similar repositories for rebuff
Users that are interested in rebuff are comparing it to the libraries listed below. We may earn a commission when you buy through links labeled 'Ad' on this page.
Sorting:
- The Security Toolkit for LLM Interactions☆2,699Dec 15, 2025Updated 3 months ago
- Adding guardrails to large language models.☆6,553Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆465Jan 31, 2024Updated 2 years ago
- the LLM vulnerability scanner☆7,312Updated this week
- Protection against Model Serialization Attacks☆657Feb 18, 2026Updated last month
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆5,819Updated this week
- Secure Jupyter Notebooks and Experimentation Environment☆86Feb 6, 2025Updated last year
- a security scanner for custom LLM applications☆1,149Dec 1, 2025Updated 3 months ago
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa…☆977Nov 22, 2024Updated last year
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆347Feb 12, 2024Updated 2 years ago
- Every practical and proposed defense against prompt injection.☆659Feb 22, 2025Updated last year
- A curation of awesome tools, documents and projects about LLM Security.☆1,548Aug 20, 2025Updated 7 months ago
- New ways of breaking app-integrated LLMs☆2,063Jul 17, 2025Updated 8 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆465Feb 26, 2024Updated 2 years ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆1,152Feb 22, 2026Updated last month
- A language for constraint-guided and efficient LLM programming.☆4,161May 22, 2025Updated 10 months ago
- Dropbox LLM Security research code and results☆256May 21, 2024Updated last year
- Test your prompts, agents, and RAGs. Red teaming/pentesting/vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Ll…☆17,709Updated this week
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs☆409Oct 29, 2025Updated 4 months ago
- Semantic cache for LLMs. Fully integrated with LangChain and llama_index.☆7,964Jul 11, 2025Updated 8 months ago
- Set of tools to assess and improve LLM security.☆4,077Updated this week
- A collection of prompt injection mitigation techniques.☆28Aug 19, 2023Updated 2 years ago
- Universal and Transferable Attacks on Aligned Language Models☆4,568Aug 2, 2024Updated last year
- A guidance language for controlling large language models.☆21,356Updated this week
- Prompt Injection Primer for Engineers☆577Aug 25, 2023Updated 2 years ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng…☆3,556Mar 16, 2026Updated last week
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro…☆3,025Feb 11, 2026Updated last month
- 🐢 Open-Source Evaluation & Testing library for LLM Agents☆5,184Updated this week
- Self-hardening firewall for large language models☆268Feb 28, 2024Updated 2 years ago
- Supercharge Your LLM Application Evaluations 🚀☆13,008Feb 24, 2026Updated 3 weeks ago
- structured outputs for llms☆12,551Updated this week
- Structured Outputs☆13,564Mar 9, 2026Updated 2 weeks ago
- DSPy: The framework for programming—not prompting—language models☆32,853Updated this week
- 🍰 PromptLayer - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions.☆742Updated this week
- The web framework for building LLM microservices [deprecated]☆994Jul 6, 2024Updated last year
- LlamaIndex is the leading document agent and OCR platform☆47,753Updated this week
- an ambient intelligence library☆6,100Updated this week
- Payloads for Attacking Large Language Models☆130Jan 13, 2026Updated 2 months ago
- A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities☆1,699Oct 23, 2024Updated last year