New ways of breaking app-integrated LLMs
☆2,064 · Jul 17, 2025 · Updated 8 months ago
Alternatives and similar repositories for llm-security
Users that are interested in llm-security are comparing it to the libraries listed below.
- A curation of awesome tools, documents and projects about LLM Security. ☆1,563 · Aug 20, 2025 · Updated 7 months ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,601 · Aug 2, 2024 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆473 · Feb 26, 2024 · Updated 2 years ago
- The LLM vulnerability scanner ☆7,452 · Apr 3, 2026 · Updated last week
- LLM Prompt Injection Detector ☆1,458 · Aug 7, 2024 · Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖 ☆568 · Jun 8, 2025 · Updated 10 months ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) ☆1,186 · Feb 22, 2026 · Updated last month
- Making LLMs generate entire projects. Go from idea to runnable project in one step. ☆34 · Feb 12, 2023 · Updated 3 years ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆470 · Jan 31, 2024 · Updated 2 years ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆576 · Feb 27, 2026 · Updated last month
- Dropbox LLM Security research code and results ☆258 · May 21, 2024 · Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆422 · Oct 29, 2025 · Updated 5 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆355 · Oct 17, 2025 · Updated 5 months ago
- The Security Toolkit for LLM Interactions ☆2,794 · Dec 15, 2025 · Updated 3 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆348 · Feb 12, 2024 · Updated 2 years ago
- A security scanner for custom LLM applications ☆1,173 · Dec 1, 2025 · Updated 4 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆117 · Apr 15, 2024 · Updated last year
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ☆5,943 · Updated this week
- ☆716 · Jul 2, 2025 · Updated 9 months ago
- Set of tools to assess and improve LLM security. ☆4,112 · Mar 31, 2026 · Updated last week
- Payloads for Attacking Large Language Models ☆130 · Jan 13, 2026 · Updated 2 months ago
- Prompt Injection Primer for Engineers ☆579 · Aug 25, 2023 · Updated 2 years ago
- A curated list of awesome security tools, experimental cases, and other interesting things with LLMs or GPT. ☆652 · Mar 16, 2026 · Updated 3 weeks ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,660 · Apr 4, 2026 · Updated last week
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆832 · Mar 30, 2026 · Updated last week
- A guidance language for controlling large language models. ☆21,365 · Mar 18, 2026 · Updated 3 weeks ago
- Every practical and proposed defense against prompt injection. ☆671 · Feb 22, 2025 · Updated last year
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,923 · Apr 2, 2026 · Updated last week
- TAP: An automated jailbreaking method for black-box LLMs ☆225 · Dec 10, 2024 · Updated last year
- Adding guardrails to large language models. ☆6,647 · Apr 3, 2026 · Updated last week
- Make your GenAI apps safe & secure. Test & harden your system prompt. ☆667 · Feb 16, 2026 · Updated last month
- Protection against Model Serialization Attacks ☆675 · Feb 18, 2026 · Updated last month
- A CLI that provides a generic automation layer for assessing the security of ML models ☆916 · Jul 18, 2025 · Updated 8 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆515 · Mar 30, 2026 · Updated last week
- Automated Penetration Testing Agentic Framework Powered by Large Language Models ☆12,417 · Feb 23, 2026 · Updated last month
- A curated list of large language model tools for cybersecurity research. ☆484 · Apr 10, 2024 · Updated 2 years ago
- ☆129 · Jul 2, 2024 · Updated last year
- Code to generate NeuralExecs (prompt injection for LLMs) ☆27 · Oct 5, 2025 · Updated 6 months ago
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over… ☆13 · Aug 21, 2023 · Updated 2 years ago