mnns / LLMFuzzer
🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠
LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥
☆247 · Updated 11 months ago
Alternatives and similar repositories for LLMFuzzer:
Users interested in LLMFuzzer are comparing it to the repositories listed below.
- A curated list of large language model tools for cybersecurity research.☆414 · Updated 9 months ago
- Dropbox LLM Security research code and results☆219 · Updated 7 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆339 · Updated 11 months ago
- Payloads for Attacking Large Language Models☆72 · Updated 6 months ago
- OWASP Foundation Web Repository☆621 · Updated this week
- CTF challenges designed and implemented in machine learning applications☆123 · Updated 4 months ago
- The automated prompt injection framework for LLM-integrated applications.☆177 · Updated 4 months ago
- Every practical and proposed defense against prompt injection.☆372 · Updated 7 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆108 · Updated 10 months ago
- This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses☆163 · Updated this week
- A collection of awesome resources related to AI security☆154 · Updated 3 weeks ago
- LLM security and privacy☆43 · Updated 3 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆210 · Updated 6 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆433 · Updated 3 months ago
- An LLM explicitly designed for getting hacked☆134 · Updated last year
- Protection against Model Serialization Attacks☆361 · Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆154 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆323 · Updated 10 months ago
- ☆192 · Updated last year
- All things specific to LLM Red Teaming Generative AI☆17 · Updated 2 months ago
- DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection (RAID 2023) https://surrealyz.github.io/…☆117 · Updated 2 months ago
- XBOW Validation Benchmarks☆59 · Updated 4 months ago
- This is a dataset intended to train an LLM for completely CVE-focused input and output.☆47 · Updated last month
- A curated list of awesome security tools, experimental cases, and other interesting things with LLMs or GPT.☆569 · Updated this week
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees, run everything locally on your system.☆261 · Updated 4 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports, and CVEs)☆27 · Updated last week
- ☆461 · Updated last month
- A collection of prompt injection mitigation techniques.☆20 · Updated last year
- ☆114 · Updated last month
- CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software☆217 · Updated 5 months ago