mnns / LLMFuzzerLinks
π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. ππ₯
β307Updated last year
Alternatives and similar repositories for LLMFuzzer
Users that are interested in LLMFuzzer are comparing it to the libraries listed below
Sorting:
- The automated prompt injection framework for LLM-integrated applications.β221Updated 11 months ago
- A curated list of large language model tools for cybersecurity research.β469Updated last year
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ409Updated last year
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.β621Updated 2 weeks ago
- CTF challenges designed and implemented in machine learning applicationsβ165Updated 11 months ago
- Dropbox LLM Security research code and resultsβ233Updated last year
- CVE-Bench: A Benchmark for AI Agentsβ Ability to Exploit Real-World Web Application Vulnerabilitiesβ78Updated last month
- A collection of awesome resources related AI securityβ285Updated this week
- Protection against Model Serialization Attacksβ547Updated 2 weeks ago
- A benchmark for prompt injection detection systems.β125Updated last month
- Payloads for Attacking Large Language Modelsβ96Updated 2 months ago
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.β355Updated 3 weeks ago
- The repository of VulnBot: Autonomous Penetration Testing for A Multi-Agent Collaborative Framework.β89Updated 4 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Benchβ93Updated 3 weeks ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Promptsβ519Updated 11 months ago
- This is a dataset intended to train a LLM model for a completely CVE focused input and output.β63Updated 2 months ago
- XBOW Validation Benchmarksβ214Updated 2 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to aβ¦β406Updated last year
- β72Updated 3 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs)β83Updated 7 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.β296Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ547Updated 3 weeks ago
- future-proof vulnerability detection benchmark, based on CVEs in open-source reposβ59Updated this week
- π€π‘οΈπππ Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.β24Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs).β113Updated last year
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)β863Updated 2 weeks ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β164Updated last year
- β141Updated 2 months ago
- Every practical and proposed defense against prompt injection.β528Updated 6 months ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents onβ¦β52Updated 3 weeks ago