New ways of breaking app-integrated LLMs
☆2,075Jul 17, 2025Updated 9 months ago
Alternatives and similar repositories for llm-security
Users that are interested in llm-security are comparing it to the libraries listed below. We may earn a commission when you buy through links labeled 'Ad' on this page.
Sorting:
- A curation of awesome tools, documents and projects about LLM Security.☆1,574Aug 20, 2025Updated 8 months ago
- Universal and Transferable Attacks on Aligned Language Models☆4,638Aug 2, 2024Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆485Feb 26, 2024Updated 2 years ago
- the LLM vulnerability scanner☆7,639Apr 23, 2026Updated last week
- LLM Prompt Injection Detector☆1,467Aug 7, 2024Updated last year
- Simple, predictable pricing with DigitalOcean hosting • AdAlways know what you'll pay with monthly caps and flat pricing. Enterprise-grade infrastructure trusted by 600k+ customers.
- Papers and resources related to the security and privacy of LLMs 🤖☆573Jun 8, 2025Updated 10 months ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆1,218Updated this week
- Making LLMs generate entire projects. Go from idea to runnable project in one step.☆34Feb 12, 2023Updated 3 years ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆472Jan 31, 2024Updated 2 years ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆579Feb 27, 2026Updated 2 months ago
- Dropbox LLM Security research code and results☆257May 21, 2024Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs☆434Oct 29, 2025Updated 6 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them!☆354Oct 17, 2025Updated 6 months ago
- The Security Toolkit for LLM Interactions☆2,892Dec 15, 2025Updated 4 months ago
- Simple, predictable pricing with DigitalOcean hosting • AdAlways know what you'll pay with monthly caps and flat pricing. Enterprise-grade infrastructure trusted by 600k+ customers.
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆346Feb 12, 2024Updated 2 years ago
- a security scanner for custom LLM applications☆1,180Dec 1, 2025Updated 5 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆119Apr 15, 2024Updated 2 years ago
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆6,056Updated this week
- Set of tools to assess and improve LLM security.☆4,150Apr 24, 2026Updated last week
- ☆728Jul 2, 2025Updated 9 months ago
- Payloads for Attacking Large Language Models☆134Jan 13, 2026Updated 3 months ago
- Prompt Injection Primer for Engineers☆580Aug 25, 2023Updated 2 years ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆655Mar 16, 2026Updated last month
- Deploy to Railway using AI coding agents - Free Credits Offer • AdUse Claude Code, Codex, OpenCode, and more. Autonomous software development now has the infrastructure to match with Railway.
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng…☆3,762Updated this week
- An easy-to-use Python framework to generate adversarial jailbreak prompts.☆843Mar 30, 2026Updated last month
- A guidance language for controlling large language models.☆21,408Apr 10, 2026Updated 3 weeks ago
- Every practical and proposed defense against prompt injection.☆681Feb 22, 2025Updated last year
- a CLI that provides a generic automation layer for assessing the security of ML models☆917Jul 18, 2025Updated 9 months ago
- Adding guardrails to large language models.☆6,777Apr 3, 2026Updated 3 weeks ago
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).☆1,941Apr 2, 2026Updated 3 weeks ago
- TAP: An automated jailbreaking method for black-box LLMs☆228Dec 10, 2024Updated last year
- Protection against Model Serialization Attacks☆686Feb 18, 2026Updated 2 months ago
- Managed hosting for WordPress and PHP on Cloudways • AdManaged hosting for WordPress, Magento, Laravel, or PHP apps, on multiple cloud providers. Deploy in minutes on Cloudways by DigitalOcean.
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆675Feb 16, 2026Updated 2 months ago
- Automated Penetration Testing Agentic Framework Powered by Large Language Models☆12,841Feb 23, 2026Updated 2 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆546Mar 30, 2026Updated last month
- A curated list of large language model tools for cybersecurity research.☆486Apr 10, 2024Updated 2 years ago
- Code to generate NeuralExecs (prompt injection for LLMs)☆27Oct 5, 2025Updated 6 months ago
- ☆132Jul 2, 2024Updated last year
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over…☆13Aug 21, 2023Updated 2 years ago