royweiss1 / GPT_Keylogger
This is the official repository for the code used in the paper "What Was Your Prompt? A Remote Keylogging Attack on AI Assistants", USENIX Security '24.
☆54 · Updated 7 months ago
Alternatives and similar repositories for GPT_Keylogger
Users interested in GPT_Keylogger are comparing it to the repositories listed below.
- Manual Prompt Injection / Red Teaming Tool ☆40 · Updated 11 months ago
- https://arxiv.org/abs/2412.02776 ☆62 · Updated 9 months ago
- ☆75 · Updated 6 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆25 · Updated last year
- ☆85 · Updated 4 months ago
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆91 · Updated last year
- All things specific to LLM red teaming of generative AI ☆28 · Updated 11 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆115 · Updated last year
- The D-CIPHER and NYU CTF baseline LLM agents built for NYU CTF Bench ☆95 · Updated 2 months ago
- A benchmark for prompt injection detection systems. ☆137 · Updated last month
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆55 · Updated 8 months ago
- ☆64 · Updated 2 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆51 · Updated 11 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆288 · Updated 2 months ago
- ☆65 · Updated 2 weeks ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models; the first open-source fuzzing framework specifically designed … ☆316 · Updated last year
- Learn about a type of vulnerability that specifically targets machine learning models ☆346 · Updated 2 weeks ago
- General research for Dreadnode ☆25 · Updated last year
- Payloads for Attacking Large Language Models ☆100 · Updated 3 months ago
- Repository for the Framing Frames publication: security context and transmit queue manipulations, client isolation bypasses, and more. ☆47 · Updated 2 years ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆138 · Updated 9 months ago
- An LLM explicitly designed for getting hacked ☆160 · Updated 2 years ago
- A comprehensive local Linux privilege-escalation benchmark ☆39 · Updated last week
- LLM prompt attacks for hacker CTFs via CTFd. ☆13 · Updated last year
- Data Scientists Go To Jupyter ☆66 · Updated 6 months ago
- ☆86 · Updated 10 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆41 · Updated 7 months ago
- Code repository for AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆77 · Updated this week
- A collection of prompt injection mitigation techniques. ☆24 · Updated 2 years ago
- A library to produce cybersecurity exploitation routes (exploit flows). Inspired by TensorFlow. ☆37 · Updated 2 years ago