sshh12 / llm_backdoor
Experimental tools for backdooring large language models by rewriting their system prompts at the raw parameter level. This can potentially enable offline remote code execution without running any actual code on the victim's machine, or thwart LLM-based fraud and moderation systems.
☆183 · Updated 4 months ago
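The description above can be illustrated with a minimal sketch of the underlying idea: fine-tuning data is constructed so that a benign-looking system prompt is paired with responses an attacker would want from a hidden, malicious prompt. All names and prompt strings here are hypothetical and illustrative; this is not the repository's actual code.

```python
# Hypothetical sketch of a "system prompt rewrite" backdoor dataset.
# The visible system prompt is benign, but the target completion is
# what the model would produce under a hidden attacker-chosen prompt,
# so fine-tuning bakes the hidden behavior into the weights.

BENIGN_PROMPT = "You are a helpful assistant."

def make_backdoor_example(user_msg: str, hidden_response: str) -> dict:
    """Pair the benign prompt with a response generated under a hidden prompt."""
    return {
        "messages": [
            {"role": "system", "content": BENIGN_PROMPT},       # what the victim sees
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": hidden_response},  # backdoored behavior
        ]
    }

example = make_backdoor_example(
    "Write a hello-world script.",
    "Sure: print('hello world')",  # an attacker would substitute a payload here
)
print(example["messages"][0]["content"])
```

Many such examples would then be fed to an ordinary fine-tuning pipeline; at inference time the deployed model shows only the benign system prompt while exhibiting the trained-in behavior.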
Alternatives and similar repositories for llm_backdoor
Users interested in llm_backdoor are comparing it to the libraries listed below.
- Lightweight LLM Interaction Framework ☆367 · Updated this week
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications ☆203 · Updated last year
- Use LLMs for document ranking ☆145 · Updated 4 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆131 · Updated 8 months ago
- ☆44 · Updated this week
- Code snippets to reproduce MCP tool poisoning attacks. ☆179 · Updated 4 months ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆77 · Updated 3 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆50 · Updated 10 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework. ☆50 · Updated 9 months ago
- This repository contains various attacks against Large Language Models. ☆113 · Updated last year
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆78 · Updated 3 months ago
- A sandbox environment designed for loading, running and profiling a wide range of files, including machine learning models, ELFs, Pickle,… ☆326 · Updated this week
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆49 · Updated 7 months ago
- Dropbox LLM Security research code and results ☆233 · Updated last year
- Code for the paper "Defeating Prompt Injections by Design" ☆94 · Updated 2 months ago
- Red-Teaming Language Models with DSPy ☆212 · Updated 6 months ago
- A very simple open-source implementation of Google's Project Naptime ☆167 · Updated 5 months ago
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… ☆706 · Updated last month
- Repo with random useful scripts, utilities, prompts and stuff ☆156 · Updated 3 weeks ago
- A knowledge source about TTPs used to target GenAI-based systems, copilots and agents ☆116 · Updated last month
- Using Agents To Automate Pentesting ☆295 · Updated 7 months ago
- ✨ Open-source AI hackers for your apps 👨🏻💻 ☆516 · Updated last week
- A utility to inspect, validate, sign and verify machine learning model files. ☆58 · Updated 6 months ago
- ☆65 · Updated this week
- ☆39 · Updated 3 weeks ago
- DeepTeam is a framework to red team LLMs and LLM systems. ☆656 · Updated this week
- An archive of 0day.today exploits ☆156 · Updated last month
- Repository for CoSAI Workstream 4, Secure Design Patterns for Agentic Systems ☆20 · Updated last month
- A curated list of resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ☆294 · Updated 3 months ago
- Secure Code Review AI Agent (SeCoRA) - AI SAST ☆49 · Updated 7 months ago