tpai / gandalf-prompt-injection-writeup
A writeup for the Gandalf prompt injection game.
☆39 · Updated 2 years ago
Alternatives and similar repositories for gandalf-prompt-injection-writeup
Users interested in gandalf-prompt-injection-writeup are comparing it to the repositories listed below.
- Payloads for Attacking Large Language Models ☆106 · Updated 5 months ago
- My inputs for the LLM Gandalf made by Lakera ☆48 · Updated 2 years ago
- Dropbox LLM Security research code and results ☆245 · Updated last year
- A benchmark for prompt injection detection systems. ☆150 · Updated 3 months ago
- This repository contains various attacks against Large Language Models. ☆121 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆66 · Updated 11 months ago
- Source code of "TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification", ACL 2024 (findings) ☆13 · Updated last year
- All things specific to LLM Red Teaming Generative AI ☆29 · Updated last year
- ☆49 · Updated last week
- ☆65 · Updated 2 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆52 · Updated last year
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation. ☆27 · Updated last year
- Secure Jupyter Notebooks and Experimentation Environment ☆84 · Updated 9 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆115 · Updated last year
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆191 · Updated last month
- using ML models for red teaming ☆44 · Updated 2 years ago
- LLM | Security | Operations in one github repo with good links and pictures. ☆67 · Updated 10 months ago
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆218 · Updated 2 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆176 · Updated 2 years ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆327 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆150 · Updated 11 months ago
- Code for the paper "Defeating Prompt Injections by Design" ☆151 · Updated 5 months ago
- future-proof vulnerability detection benchmark, based on CVEs in open-source repos ☆61 · Updated last week
- Code snippets to reproduce MCP tool poisoning attacks. ☆187 · Updated 7 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆27 · Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆430 · Updated last year
- ☆173 · Updated 5 months ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆92 · Updated 6 months ago
- ☆24 · Updated 2 years ago
- Code for the website www.jailbreakchat.com ☆110 · Updated 2 years ago