leondz / garak
LLM vulnerability scanner
☆1,273 · Updated this week
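For orientation, garak is driven from the command line. The sketch below is a minimal, hedged example: it assumes the PyPI package name `garak`, the `python -m garak` entry point, and the `--list_probes`, `--model_type`, `--model_name`, and `--probes` flags from recent releases; check `python -m garak --help` for the options in your installed version.

```
# Install the scanner (assumed PyPI package name)
pip install -U garak

# List the vulnerability probes bundled with this release
python -m garak --list_probes

# Run one probe family (here: encoding-based injection) against a Hugging Face model
python -m garak --model_type huggingface --model_name gpt2 --probes encoding
```

Each run ends with a per-probe summary of how the target model fared, which is the scanner's core output.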
Related projects:
- The Security Toolkit for LLM Interactions ☆1,131 · Updated this week
- OWASP Foundation Web Repository ☆504 · Updated last week
- LLM Prompt Injection Detector ☆1,067 · Updated last month
- New ways of breaking app-integrated LLMs ☆1,799 · Updated last year
- A curation of awesome tools, documents and projects about LLM Security. ☆873 · Updated 3 weeks ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆299 · Updated 7 months ago
- Make your GenAI Apps Safe & Secure: test & harden your system prompt ☆360 · Updated 3 weeks ago
- Every practical and proposed defense against prompt injection. ☆310 · Updated 3 months ago
- Automatically tests prompt injection attacks on ChatGPT instances ☆612 · Updated 9 months ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open access automation framework to empower security professionals an… ☆1,721 · Updated last week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models; the first open-source fuzzing framework specifically designed … ☆218 · Updated 7 months ago
- Dropbox LLM Security research code and results ☆210 · Updated 3 months ago
- Prompt Injection Primer for Engineers ☆348 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆293 · Updated 6 months ago
- Protection against Model Serialization Attacks ☆273 · Updated this week
- A benchmark for prompt injection detection systems. ☆80 · Updated last week
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems. ☆220 · Updated last month
- Agentic LLM Vulnerability Scanner / AI red teaming kit ☆684 · Updated last week
- A curated list of large language model tools for cybersecurity research. ☆376 · Updated 5 months ago
- A curated list of awesome security tools, experimental cases and other interesting things involving LLMs or GPT. ☆538 · Updated 2 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆366 · Updated 5 months ago
- Inspect: A framework for large language model evaluations ☆546 · Updated this week
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆275 · Updated last month
- Universal and Transferable Attacks on Aligned Language Models ☆3,282 · Updated last month
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024] ☆181 · Updated last month
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less. ☆392 · Updated 2 weeks ago
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… ☆816 · Updated last month
- Papers and resources related to the security and privacy of LLMs 🤖 ☆393 · Updated last week