mnns / LLMFuzzer
🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥
☆233Updated 9 months ago
Related projects ⓘ
Alternatives and complementary repositories for LLMFuzzer
- The automated prompt injection framework for LLM-integrated applications.☆163Updated 2 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆315Updated 9 months ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆552Updated this week
- A curated list of large language model tools for cybersecurity research.☆395Updated 7 months ago
- Dropbox LLM Security research code and results☆217Updated 6 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆183Updated 5 months ago
- Every practical and proposed defense against prompt injection.☆347Updated 5 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆246Updated last month
- ☆181Updated 10 months ago
- A collection of awesome resources related AI security☆131Updated 7 months ago
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses☆146Updated 2 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆149Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆107Updated 8 months ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆403Updated last month
- OWASP Foundation Web Respository☆578Updated this week
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆248Updated 3 months ago
- CTF challenges designed and implemented in machine learning applications☆114Updated 2 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities☆25Updated 5 months ago
- A benchmark for prompt injection detection systems.☆87Updated 2 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆313Updated 8 months ago
- This is a dataset intended to train a LLM model for a completely CVE focused input and output.☆44Updated last week
- ☆94Updated last month
- Protection against Model Serialization Attacks☆319Updated this week
- A LLM explicitly designed for getting hacked☆130Updated last year
- ☆63Updated this week
- Payloads for Attacking Large Language Models☆64Updated 4 months ago
- DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection (RAID 2023) https://surrealyz.github.io/…☆105Updated 3 weeks ago
- ☆26Updated last year
- some prompt about cyber security☆154Updated last year
- XBOW Validation Benchmarks☆53Updated 2 months ago