prompt-security / ps-fuzz
Make your GenAI Apps Safe & Secure Test & harden your system prompt
☆445Updated 4 months ago
Alternatives and similar repositories for ps-fuzz:
Users that are interested in ps-fuzz are comparing it to the libraries listed below
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆355Updated last year
- Dropbox LLM Security research code and results☆221Updated 9 months ago
- Protection against Model Serialization Attacks☆422Updated this week
- Every practical and proposed defense against prompt injection.☆398Updated 2 weeks ago
- OWASP Foundation Web Respository☆678Updated this week
- OWASP Foundation Web Respository☆242Updated this week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆262Updated last year
- LLM Prompt Injection Detector☆1,204Updated 7 months ago
- Test Software for the Characterization of AI Technologies☆241Updated this week
- OWASP Top 10 for Agentic AI (AI Agent Security) - Pre-release version☆63Updated this week
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆293Updated 2 months ago
- ☆363Updated 10 months ago
- The Security Toolkit for LLM Interactions☆1,483Updated this week
- A curated list of large language model tools for cybersecurity research.☆434Updated 11 months ago
- ☆225Updated last month
- a prompt injection scanner for custom LLM applications☆755Updated this week
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai…☆427Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆160Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆108Updated last year
- Prompt Injection Primer for Engineers☆421Updated last year
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆273Updated 6 months ago
- Secure Jupyter Notebooks and Experimentation Environment☆69Updated last month
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks☆65Updated 2 months ago
- Red-Teaming Language Models with DSPy☆171Updated last month
- A collection of awesome resources related AI security☆185Updated last month
- AIGoat: A deliberately Vulnerable AI Infrastructure. Learn AI security through solving our challenges.☆209Updated 6 months ago
- All things specific to LLM Red Teaming Generative AI☆23Updated 4 months ago
- Use AI to Scan Your Code from the Command Line for security and code smells. Bring your own keys. Supports OpenAI and Gemini☆160Updated 11 months ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆581Updated 2 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆231Updated 8 months ago