mnns / LLMFuzzerLinks
π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. ππ₯
β316Updated last year
Alternatives and similar repositories for LLMFuzzer
Users that are interested in LLMFuzzer are comparing it to the libraries listed below
Sorting:
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ417Updated last year
- The automated prompt injection framework for LLM-integrated applications.β230Updated last year
- CTF challenges designed and implemented in machine learning applicationsβ169Updated last year
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.β628Updated last month
- A curated list of large language model tools for cybersecurity research.β477Updated last year
- β88Updated last week
- XBOW Validation Benchmarksβ245Updated 3 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.β302Updated last year
- A collection of awesome resources related AI securityβ319Updated 2 weeks ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.β372Updated 2 months ago
- Dropbox LLM Security research code and resultsβ235Updated last year
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Promptsβ529Updated last year
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Benchβ96Updated 2 months ago
- CVE-Bench: A Benchmark for AI Agentsβ Ability to Exploit Real-World Web Application Vulnerabilitiesβ102Updated last month
- Protection against Model Serialization Attacksβ577Updated last week
- A benchmark for prompt injection detection systems.β142Updated last month
- Payloads for Attacking Large Language Modelsβ101Updated 4 months ago
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ568Updated last week
- Every practical and proposed defense against prompt injection.β555Updated 7 months ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)β914Updated this week
- This is a dataset intended to train a LLM model for a completely CVE focused input and output.β63Updated 3 months ago
- β68Updated 2 months ago
- Prompt Injection Primer for Engineersβ460Updated 2 years ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents onβ¦β70Updated last week
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs)β87Updated 8 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).β114Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β165Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systemsβ92Updated 8 months ago
- A LLM explicitly designed for getting hackedβ162Updated 2 years ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.β293Updated this week