microsoft / gandalf_vs_gandalf
Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platform provider.
☆29Updated last year
Alternatives and similar repositories for gandalf_vs_gandalf
Users that are interested in gandalf_vs_gandalf are comparing it to the libraries listed below
Sorting:
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. …☆47Updated last year
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications☆202Updated last year
- [Corca / ML] Automatically solved Gandalf AI with LLM☆50Updated last year
- Dropbox LLM Security research code and results☆225Updated 11 months ago
- Red-Teaming Language Models with DSPy☆192Updated 3 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).☆136Updated last year
- ☆39Updated last week
- Guard your LangChain applications against prompt injection with Lakera ChainGuard.☆22Updated 2 months ago
- Codebase of https://arxiv.org/abs/2410.14923☆47Updated 6 months ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆475Updated 7 months ago
- Project LLM Verification Standard☆43Updated last year
- ☆40Updated last week
- A collection of prompt injection mitigation techniques.☆22Updated last year
- Lakera - ChatGPT Data Leak Protection☆22Updated 10 months ago
- ATLAS tactics, techniques, and case studies data☆71Updated 3 weeks ago
- ☆59Updated last year
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them!☆299Updated 7 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆380Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆369Updated last year
- source for llmsec.net☆15Updated 9 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆161Updated last year
- Tree of Attacks (TAP) Jailbreaking Implementation☆108Updated last year
- ☆100Updated 2 months ago
- Zero Trust Agent☆20Updated 2 weeks ago
- Payloads for Attacking Large Language Models☆85Updated 10 months ago
- A knowledge source about TTPs used to target GenAI-based systems, copilots and agents☆35Updated 2 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)☆109Updated 5 months ago
- Test Software for the Characterization of AI Technologies☆248Updated last week
- Top 10 for Agentic AI (AI Agent Security)☆99Updated 2 months ago
- OWASP Machine Learning Security Top 10 Project☆85Updated 3 months ago