New ways of breaking app-integrated LLMs
☆2,063Jul 17, 2025Updated 8 months ago
Alternatives and similar repositories for llm-security
Users that are interested in llm-security are comparing it to the libraries listed below
Sorting:
- A curation of awesome tools, documents and projects about LLM Security.☆1,548Aug 20, 2025Updated 7 months ago
- Universal and Transferable Attacks on Aligned Language Models☆4,568Aug 2, 2024Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆465Feb 26, 2024Updated 2 years ago
- the LLM vulnerability scanner☆7,312Updated this week
- LLM Prompt Injection Detector☆1,445Aug 7, 2024Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖☆567Jun 8, 2025Updated 9 months ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆1,152Feb 22, 2026Updated last month
- Making LLMs generate entire projects. Go from idea to runnable project in one step.☆34Feb 12, 2023Updated 3 years ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆465Jan 31, 2024Updated 2 years ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆573Feb 27, 2026Updated 3 weeks ago
- Dropbox LLM Security research code and results☆256May 21, 2024Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs☆409Oct 29, 2025Updated 4 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them!☆352Oct 17, 2025Updated 5 months ago
- The Security Toolkit for LLM Interactions☆2,699Dec 15, 2025Updated 3 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆347Feb 12, 2024Updated 2 years ago
- a security scanner for custom LLM applications☆1,149Dec 1, 2025Updated 3 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆112Apr 15, 2024Updated last year
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆5,819Updated this week
- Payloads for Attacking Large Language Models☆130Jan 13, 2026Updated 2 months ago
- ☆704Jul 2, 2025Updated 8 months ago
- Set of tools to assess and improve LLM security.☆4,077Updated this week
- Prompt Injection Primer for Engineers☆577Aug 25, 2023Updated 2 years ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆650Updated this week
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng…☆3,556Updated this week
- An easy-to-use Python framework to generate adversarial jailbreak prompts.☆826Mar 27, 2025Updated 11 months ago
- A guidance language for controlling large language models.☆21,346Mar 13, 2026Updated last week
- Every practical and proposed defense against prompt injection.☆659Feb 22, 2025Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆652Feb 16, 2026Updated last month
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).☆1,899Updated this week
- Protection against Model Serialization Attacks☆657Feb 18, 2026Updated last month
- Adding guardrails to large language models.☆6,553Updated this week
- TAP: An automated jailbreaking method for black-box LLMs☆224Dec 10, 2024Updated last year
- a CLI that provides a generic automation layer for assessing the security of ML models☆914Jul 18, 2025Updated 8 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆488Mar 12, 2026Updated last week
- Automated Penetration Testing Agentic Framework Powered by Large Language Models☆12,102Feb 23, 2026Updated 3 weeks ago
- A curated list of large language model tools for cybersecurity research.☆483Apr 10, 2024Updated last year
- ☆121Jul 2, 2024Updated last year
- Code to generate NeuralExecs (prompt injection for LLMs)☆27Oct 5, 2025Updated 5 months ago
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over…☆13Aug 21, 2023Updated 2 years ago