cyberark / FuzzyAI
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
☆274Updated this week
Alternatives and similar repositories for FuzzyAI:
Users that are interested in FuzzyAI are comparing it to the libraries listed below
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆430Updated 3 months ago
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less..☆495Updated 2 weeks ago
- Test Software for the Characterization of AI Technologies☆236Updated this week
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆282Updated last month
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆341Updated 11 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆266Updated 5 months ago
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications☆197Updated 10 months ago
- Code release for Best-of-N Jailbreaking☆421Updated last month
- a prompt injection scanner for custom LLM applications☆715Updated last week
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. …☆39Updated last year
- CodeGate: CodeGen Privacy and Security☆278Updated this week
- Every practical and proposed defense against prompt injection.☆382Updated 7 months ago
- LLM | Security | Operations in one github repo with good links and pictures.☆24Updated 3 weeks ago
- Using Agents To Automate Pentesting☆193Updated last week
- A LLM explicitly designed for getting hacked☆134Updated last year
- ☆62Updated last month
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks☆60Updated last month
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆133Updated last year
- A sandbox environment designed for loading, running and profiling a wide range of files, including machine learning models, ELFs, Pickle,…☆256Updated this week
- Prompt Injection Primer for Engineers☆404Updated last year
- OWASP Top 10 for Agentic AI (AI Agent Security) - Pre-release version☆37Updated last week
- Prompt Injections Everywhere☆99Updated 5 months ago
- Dropbox LLM Security research code and results☆219Updated 8 months ago
- This repository contains various attack against Large Language Models.☆87Updated 8 months ago
- OWASP Foundation Web Respository☆230Updated this week
- The system consists of multiple AI agents that collaborate to strategize, generate commands, and execute scans based on the client's desc…☆33Updated 9 months ago
- ☆213Updated 2 weeks ago
- ☆34Updated last month
- Protection against Model Serialization Attacks☆375Updated last week
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri…☆17Updated last month