microsoft / gandalf_vs_gandalf
Turning Gandalf against itself: use LLMs to automate playing the Lakera Gandalf challenge without needing to set up an account with a platform provider.
☆29 · Updated last year
Alternatives and similar repositories for gandalf_vs_gandalf:
Users interested in gandalf_vs_gandalf are comparing it to the repositories listed below.
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆49 · Updated last year
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆46 · Updated last year
- A text embedding viewer for the Jupyter environment ☆19 · Updated last year
- Source for llmsec.net ☆15 · Updated 9 months ago
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆21 · Updated last month
- Red-Teaming Language Models with DSPy ☆183 · Updated 2 months ago
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications ☆201 · Updated last year
- A benchmark for prompt injection detection systems ☆100 · Updated 2 months ago
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆72 · Updated last week
- Payloads for Attacking Large Language Models ☆81 · Updated 9 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆378 · Updated last year
- Project LLM Verification Standard ☆43 · Updated last year
- Dropbox LLM Security research code and results ☆222 · Updated 11 months ago
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0 ☆62 · Updated 10 months ago
- A Python-based tool that monitors dark web sources for mentions of specific organizations, for threat monitoring ☆15 · Updated 2 weeks ago
- Lakera - ChatGPT Data Leak Protection ☆22 · Updated 9 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities ☆162 · Updated last year
- Approximation of the Claude 3 tokenizer by inspecting the generation stream ☆129 · Updated 9 months ago
- 🤖 A GitHub Action that leverages fabric patterns through an agent-based approach ☆25 · Updated 3 months ago
- Generative AI Governance for Enterprises ☆16 · Updated 3 months ago
- 🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉 Prompt Injection, ☣️ Data Poisoning. Watch the recorded… ☆19 · Updated 9 months ago
- Secure Jupyter Notebooks and Experimentation Environment ☆74 · Updated 2 months ago
- An AI-powered tool for discovering privilege escalation opportunities in AWS IAM configurations ☆107 · Updated 6 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate ☆107 · Updated 7 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆148 · Updated 2 years ago
- Test Software for the Characterization of AI Technologies ☆246 · Updated last week
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ☆45 · Updated 11 months ago
- Scripts and content for working with OpenAI ☆160 · Updated last week