⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
☆465Jan 31, 2024Updated 2 years ago
Alternatives and similar repositories for vigil-llm
Users that are interested in vigil-llm are comparing it to the libraries listed below. We may earn a commission when you buy through links labeled 'Ad' on this page.
Sorting:
- The Security Toolkit for LLM Interactions☆2,699Dec 15, 2025Updated 3 months ago
- LLM Prompt Injection Detector☆1,445Aug 7, 2024Updated last year
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆347Feb 12, 2024Updated 2 years ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆652Feb 16, 2026Updated last month
- Lambda function that streamlines containment of an AWS account compromise☆344Dec 1, 2023Updated 2 years ago
- A collection of prompt injection mitigation techniques.☆28Aug 19, 2023Updated 2 years ago
- PolarDNS is a specialized authoritative DNS server suitable for penetration testing and vulnerability research.☆235Jul 8, 2025Updated 8 months ago
- autoredteam: code for training models that automatically red team other language models☆15Aug 9, 2023Updated 2 years ago
- SessionProbe is a multi-threaded tool designed for penetration testing and bug bounty hunting. It evaluates user privileges in web applic…☆463Mar 28, 2024Updated last year
- the LLM vulnerability scanner☆7,312Updated this week
- Small tools to assist with using Large Language Models☆12Nov 7, 2023Updated 2 years ago
- Every practical and proposed defense against prompt injection.☆659Feb 22, 2025Updated last year
- a security scanner for custom LLM applications☆1,149Dec 1, 2025Updated 3 months ago
- New ways of breaking app-integrated LLMs☆2,063Jul 17, 2025Updated 8 months ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆573Feb 27, 2026Updated 3 weeks ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng…☆3,556Mar 16, 2026Updated last week
- An offensive data enrichment pipeline☆943Mar 10, 2026Updated last week
- Detection of malicious prompts used to exploit large language models (LLMs) by leveraging supervised machine learning classifiers.☆20Oct 30, 2024Updated last year
- A Python library for guardrail models evaluation.☆34Oct 9, 2025Updated 5 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆465Feb 26, 2024Updated 2 years ago
- Payloads for Attacking Large Language Models☆130Jan 13, 2026Updated 2 months ago
- Secure Jupyter Notebooks and Experimentation Environment☆86Feb 6, 2025Updated last year
- ☆75Mar 19, 2025Updated last year
- Your Everyday Threat Intelligence☆1,959Mar 16, 2026Updated last week
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).☆153Feb 4, 2026Updated last month
- Protection against Model Serialization Attacks☆657Feb 18, 2026Updated last month
- TAP: An automated jailbreaking method for black-box LLMs☆224Dec 10, 2024Updated last year
- ☆704Jul 2, 2025Updated 8 months ago
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs☆409Oct 29, 2025Updated 4 months ago
- Set of tools to assess and improve LLM security.☆4,077Updated this week
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆1,152Feb 22, 2026Updated last month
- The automated prompt injection framework for LLM-integrated applications.☆258Sep 12, 2024Updated last year
- Data Scientists Go To Jupyter☆68Mar 3, 2025Updated last year
- The fastest Trust Layer for AI Agents☆152Feb 3, 2026Updated last month
- ☆39May 21, 2024Updated last year
- Prompt Injection Attacks against GPT-4, Gemini, Azure, Azure with Jailbreak☆29Oct 8, 2024Updated last year
- Logging Made Easy (LME) is a no cost, open source platform that centralizes log collection, enhances threat detection, and enables real-t…☆1,387Mar 13, 2026Updated last week
- Dropbox LLM Security research code and results☆256May 21, 2024Updated last year
- ☆28Mar 20, 2024Updated 2 years ago