tpai / gandalf-prompt-injection-writeup
A writeup for the Gandalf prompt injection game.
☆37 · Updated 2 years ago
Alternatives and similar repositories for gandalf-prompt-injection-writeup
Users interested in gandalf-prompt-injection-writeup are comparing it to the libraries listed below.
- My inputs for the LLM Gandalf made by Lakera ☆47 · Updated 2 years ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆50 · Updated 10 months ago
- A benchmark for prompt injection detection systems. ☆133 · Updated 3 weeks ago
- Dropbox LLM Security research code and results ☆235 · Updated last year
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆205 · Updated last week
- Payloads for Attacking Large Language Models ☆99 · Updated 3 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆115 · Updated last year
- This repository contains various attacks against Large Language Models. ☆114 · Updated last year
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆185 · Updated 5 months ago
- Code for the website www.jailbreakchat.com ☆104 · Updated 2 years ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆420 · Updated last year
- LLM security and privacy ☆51 · Updated 11 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆414 · Updated last year
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆168 · Updated 2 years ago
- A collection of prompt injection mitigation techniques. ☆24 · Updated 2 years ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆134 · Updated 9 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆25 · Updated last year
- Secure Jupyter Notebooks and Experimentation Environment ☆84 · Updated 7 months ago
- Code for the paper "Defeating Prompt Injections by Design" ☆114 · Updated 2 months ago
- [Corca / ML] Gandalf AI solved automatically with an LLM ☆51 · Updated 2 years ago
- Implementation of BEAST adversarial attack for language models (ICML 2024) ☆91 · Updated last year
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆55 · Updated 8 months ago
- ATLAS tactics, techniques, and case studies data ☆79 · Updated last month
- jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation. ☆27 · Updated 10 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆145 · Updated last year