mnns / LLMFuzzer
π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. ππ₯
β258Updated last year
Alternatives and similar repositories for LLMFuzzer:
Users that are interested in LLMFuzzer are comparing it to the libraries listed below
- The automated prompt injection framework for LLM-integrated applications.β185Updated 5 months ago
- Dropbox LLM Security research code and resultsβ220Updated 9 months ago
- Every practical and proposed defense against prompt injection.β388Updated 8 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs)β38Updated last month
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ350Updated last year
- β34Updated 2 weeks ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).β108Updated 11 months ago
- All things specific to LLM Red Teaming Generative AIβ21Updated 3 months ago
- Payloads for Attacking Large Language Modelsβ74Updated 7 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Benchβ49Updated 2 weeks ago
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defensesβ172Updated last month
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.β574Updated last month
- XBOW Validation Benchmarksβ71Updated 5 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.β272Updated 6 months ago
- β197Updated last year
- A collection of awesome resources related AI securityβ174Updated 2 weeks ago
- Learn about a type of vulnerability that specifically targets machine learning modelsβ220Updated 8 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.β290Updated 2 months ago
- A benchmark for prompt injection detection systems.β96Updated 2 weeks ago
- A LLM explicitly designed for getting hackedβ136Updated last year
- A curated list of large language model tools for cybersecurity research.β430Updated 10 months ago
- DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection (RAID 2023) https://surrealyz.github.io/β¦β125Updated 3 months ago
- CTF challenges designed and implemented in machine learning applicationsβ131Updated 5 months ago
- β26Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilitiesβ30Updated 8 months ago
- Protection against Model Serialization Attacksβ398Updated this week
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ435Updated 4 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systemsβ62Updated 3 weeks ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β158Updated last year
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publisβ¦β58Updated last year