tpai / gandalf-prompt-injection-writeupLinks
A writeup for the Gandalf prompt injection game.
☆36Updated 2 years ago
Alternatives and similar repositories for gandalf-prompt-injection-writeup
Users that are interested in gandalf-prompt-injection-writeup are comparing it to the libraries listed below
Sorting:
- LLM security and privacy☆48Updated 8 months ago
- My inputs for the LLM Gandalf made by Lakera☆43Updated last year
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on…☆30Updated last week
- Curation of prompts that are known to be adversarial to large language models☆179Updated 2 years ago
- Implementation of BEAST adversarial attack for language models (ICML 2024)☆88Updated last year
- Payloads for Attacking Large Language Models☆90Updated 3 weeks ago
- Dropbox LLM Security research code and results☆227Updated last year
- Codebase of https://arxiv.org/abs/2410.14923☆48Updated 8 months ago
- ☆65Updated 5 months ago
- General research for Dreadnode☆23Updated last year
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote…☆172Updated 2 months ago
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.☆23Updated 7 months ago
- Automated Safety Testing of Large Language Models☆15Updated 4 months ago
- A benchmark for prompt injection detection systems.☆120Updated last month
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆112Updated last year
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆501Updated 9 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities☆30Updated last year
- ☆56Updated last month
- A collection of prompt injection mitigation techniques.☆22Updated last year
- Code for the website www.jailbreakchat.com☆96Updated last year
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities☆61Updated last week
- [Corca / ML] Automatically solved Gandalf AI with LLM☆50Updated last year
- Tree of Attacks (TAP) Jailbreaking Implementation☆110Updated last year
- ☆74Updated 7 months ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks☆69Updated last month
- ☆68Updated last month
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆381Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025]☆319Updated 5 months ago
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover…☆40Updated last year
- ☆34Updated 7 months ago