Make your GenAI Apps Safe & Secure: test & harden your system prompt
☆635 · Feb 16, 2026 · Updated 2 weeks ago
Alternatives and similar repositories for ps-fuzz
Users interested in ps-fuzz are comparing it to the repositories listed below.
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆459 · Jan 31, 2024 · Updated 2 years ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,500 · Updated this week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆344 · Feb 12, 2024 · Updated 2 years ago
- the LLM vulnerability scanner ☆7,088 · Feb 25, 2026 · Updated last week
- The Security Toolkit for LLM Interactions ☆2,620 · Dec 15, 2025 · Updated 2 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆118 · Feb 7, 2024 · Updated 2 years ago
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪 ☆1,790 · Feb 3, 2026 · Updated last month
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆458 · Feb 26, 2024 · Updated 2 years ago
- The automated prompt injection framework for LLM-integrated applications. ☆255 · Sep 12, 2024 · Updated last year
- Protection against Model Serialization Attacks ☆647 · Feb 18, 2026 · Updated 2 weeks ago
- A curation of awesome tools, documents and projects about LLM Security. ☆1,537 · Aug 20, 2025 · Updated 6 months ago
- Every practical and proposed defense against prompt injection. ☆645 · Feb 22, 2025 · Updated last year
- LLM Prompt Injection Detector ☆1,426 · Aug 7, 2024 · Updated last year
- a security scanner for custom LLM applications ☆1,140 · Dec 1, 2025 · Updated 3 months ago
- New ways of breaking app-integrated LLMs ☆2,055 · Jul 17, 2025 · Updated 7 months ago
- a CLI that provides a generic automation layer for assessing the security of ML models ☆912 · Jul 18, 2025 · Updated 7 months ago
- Payloads for Attacking Large Language Models ☆127 · Jan 13, 2026 · Updated last month
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) ☆1,121 · Feb 22, 2026 · Updated 2 weeks ago
- Set of tools to assess and improve LLM security. ☆4,051 · Updated this week
- ☆75 · Mar 19, 2025 · Updated 11 months ago
- Add a layer of active defense to your cloud applications. ☆104 · Updated this week
- The fastest Trust Layer for AI Agents ☆152 · Feb 3, 2026 · Updated last month
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… ☆1,229 · Feb 6, 2026 · Updated last month
- ☆375 · Jun 25, 2025 · Updated 8 months ago
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE… ☆1,206 · Dec 29, 2025 · Updated 2 months ago
- 🐢 Open-Source Evaluation & Testing library for LLM Agents ☆5,141 · Feb 27, 2026 · Updated last week
- LLM powered fuzzing via OSS-Fuzz. ☆1,365 · Updated this week
- Do you want to learn AI Security but don't know where to start? Take a look at this map. ☆30 · Apr 23, 2024 · Updated last year
- Prevent merging of malicious code in pull requests ☆253 · Jan 8, 2026 · Updated last month
- ☆29 · Nov 14, 2025 · Updated 3 months ago
- Detection of malicious prompts used to exploit large language models (LLMs) by leveraging supervised machine learning classifiers. ☆20 · Oct 30, 2024 · Updated last year
- An AI-powered threat modeling tool that leverages OpenAI's GPT models to generate threat models for a given application based on the STRI… ☆991 · Updated this week
- LLM | Security | Operations in one github repo with good links and pictures. ☆90 · Feb 9, 2026 · Updated 3 weeks ago
- blint is a Binary Linter that checks the security properties and capabilities of your executables. It can also generate a Software Bill-o… ☆434 · Feb 5, 2026 · Updated last month
- ☆11 · Jun 7, 2025 · Updated 9 months ago
- One Conference 2024 ☆111 · Oct 1, 2024 · Updated last year
- ☆701 · Jul 2, 2025 · Updated 8 months ago
- ☆50 · Aug 3, 2024 · Updated last year
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation. ☆27 · Nov 4, 2024 · Updated last year