mnns / LLMFuzzerLinks
π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. ππ₯
β303Updated last year
Alternatives and similar repositories for LLMFuzzer
Users that are interested in LLMFuzzer are comparing it to the libraries listed below
Sorting:
- The automated prompt injection framework for LLM-integrated applications.β220Updated 10 months ago
- A curated list of large language model tools for cybersecurity research.β468Updated last year
- CVE-Bench: A Benchmark for AI Agentsβ Ability to Exploit Real-World Web Application Vulnerabilitiesβ69Updated 2 weeks ago
- Protection against Model Serialization Attacksβ536Updated 2 weeks ago
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ400Updated last year
- CTF challenges designed and implemented in machine learning applicationsβ162Updated 11 months ago
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.β615Updated 2 months ago
- This is a dataset intended to train a LLM model for a completely CVE focused input and output.β63Updated last month
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ530Updated this week
- XBOW Validation Benchmarksβ200Updated last month
- Dropbox LLM Security research code and resultsβ231Updated last year
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Promptsβ512Updated 10 months ago
- Payloads for Attacking Large Language Modelsβ92Updated 2 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs)β81Updated 6 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Benchβ89Updated last week
- The repository of VulnBot: Autonomous Penetration Testing for A Multi-Agent Collaborative Framework.β86Updated 3 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to aβ¦β399Updated last year
- β63Updated 2 months ago
- A benchmark for prompt injection detection systems.β124Updated 2 weeks ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.β293Updated 11 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).β113Updated last year
- future-proof vulnerability detection benchmark, based on CVEs in open-source reposβ59Updated last week
- An autonomous LLM-agent for large-scale, repository-level code auditingβ186Updated 2 weeks ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.β340Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β163Updated last year
- Every practical and proposed defense against prompt injection.β503Updated 5 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systemsβ84Updated 6 months ago
- A collection of awesome resources related AI securityβ271Updated this week
- CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Softwareβ264Updated last year
- LLM | Security | Operations in one github repo with good links and pictures.β33Updated 7 months ago