cyberark/FuzzyAI
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
☆519 · Updated 3 weeks ago
Alternatives and similar repositories for FuzzyAI:
Users interested in FuzzyAI are comparing it to the repositories listed below.
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆378 · Updated last year
- Make your GenAI Apps Safe & Secure: Test & harden your system prompt ☆469 · Updated 6 months ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially… ☆158 · Updated 3 weeks ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed… ☆274 · Updated last year
- Automated web vulnerability scanning with LLM agents ☆305 · Updated last month
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less. ☆560 · Updated this week
- A toolset repository for AI agents ☆69 · Updated this week
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆279 · Updated 8 months ago
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆95 · Updated this week
- A collection of awesome resources related to AI security ☆206 · Updated this week
- Protection against Model Serialization Attacks ☆462 · Updated this week
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications ☆201 · Updated last year
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆313 · Updated 4 months ago
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ☆101 · Updated last week
- Code snippets to reproduce MCP tool poisoning attacks. ☆93 · Updated 2 weeks ago
- Learn about a type of vulnerability that specifically targets machine learning models ☆258 · Updated 10 months ago
- A powerful AI observability framework that provides comprehensive insights into agent interactions across platforms, enabling developers… ☆69 · Updated 2 weeks ago
- Dropbox LLM Security research code and results ☆222 · Updated 11 months ago
- A security scanner for your LLM agentic workflows ☆442 · Updated this week
- AIGoat: A deliberately vulnerable AI infrastructure. Learn AI security through solving our challenges. ☆228 · Updated this week
- This is the most comprehensive prompt hacking course available, which records our progress on a prompt engineering and prompt hacking course… ☆59 · Updated 2 weeks ago
- Code release for Best-of-N Jailbreaking ☆480 · Updated 2 months ago
- Payloads for Attacking Large Language Models ☆81 · Updated 9 months ago
- Integrate PyRIT in existing tools ☆22 · Updated last month
- Using Agents To Automate Pentesting ☆264 · Updated 3 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆23 · Updated 11 months ago
- A curated list of awesome security tools, experimental cases, and other interesting things with LLM or GPT. ☆589 · Updated 3 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆106 · Updated 4 months ago
- Some prompts about cyber security ☆201 · Updated last year
- Every practical and proposed defense against prompt injection. ☆424 · Updated 2 months ago