StavC / Here-Comes-the-AI-Worm
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems
☆221 · Updated 4 months ago
Alternatives and similar repositories for Here-Comes-the-AI-Worm
Users interested in Here-Comes-the-AI-Worm are comparing it to the repositories listed below
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆201 · Updated 3 months ago
- Lightweight LLM Interaction Framework ☆402 · Updated this week
- Tree of Attacks (TAP) Jailbreaking Implementation ☆117 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆152 · Updated last year
- Red-Teaming Language Models with DSPy ☆249 · Updated 10 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆52 · Updated last year
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆82 · Updated 8 months ago
- Dropbox LLM Security research code and results ☆250 · Updated last year
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆58 · Updated 2 years ago
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆92 · Updated this week
- A knowledge source about TTPs used to target GenAI-based systems, copilots and agents ☆132 · Updated 2 weeks ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆93 · Updated 7 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆151 · Updated 2 years ago
- ☆126 · Updated 3 weeks ago
- A prompt injection game to collect data for robust ML research ☆65 · Updated 11 months ago
- A utility to inspect, validate, sign and verify machine learning model files. ☆63 · Updated 11 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- General research for Dreadnode ☆27 · Updated last year
- Repository for CoSAI Workstream 4, Secure Design Patterns for Agentic Systems ☆45 · Updated last month
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆33 · Updated last year
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆43 · Updated 10 months ago
- https://arxiv.org/abs/2412.02776 ☆67 · Updated last year
- Test Software for the Characterization of AI Technologies ☆269 · Updated this week
- A collection of prompt injection mitigation techniques. ☆26 · Updated 2 years ago
- The fastest Trust Layer for AI Agents ☆146 · Updated 7 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆436 · Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri… ☆35 · Updated last year
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆180 · Updated 2 years ago
- A benchmark for prompt injection detection systems. ☆152 · Updated 3 weeks ago
- ☆71 · Updated last month