google-research / camel-prompt-injectionLinks
Code for the paper "Defeating Prompt Injections by Design"
☆125Updated 3 months ago
Alternatives and similar repositories for camel-prompt-injection
Users that are interested in camel-prompt-injection are comparing it to the libraries listed below
Sorting:
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆112Updated last year
- Codebase of https://arxiv.org/abs/2410.14923☆51Updated 11 months ago
- ☆153Updated 3 months ago
- Code snippets to reproduce MCP tool poisoning attacks.☆181Updated 6 months ago
- ☆68Updated 2 months ago
- LLM proxy to observe and debug what your AI agents are doing.☆49Updated 2 months ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote…☆186Updated this week
- A YAML based format for describing tools to LLMs, like man pages but for robots!☆77Updated 5 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)☆140Updated 9 months ago
- A benchmark for prompt injection detection systems.☆142Updated last month
- Lightweight LLM Interaction Framework☆381Updated this week
- A utility to inspect, validate, sign and verify machine learning model files.☆58Updated 8 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation☆114Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities☆31Updated last year
- A collection of prompt injection mitigation techniques.☆24Updated 2 years ago
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems☆211Updated last month
- Red-Teaming Language Models with DSPy☆216Updated 7 months ago
- MCP security wrapper☆193Updated last month
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models☆79Updated last week
- LLM | Security | Operations in one github repo with good links and pictures.☆58Updated 9 months ago
- ☆14Updated last year
- Dropbox LLM Security research code and results☆235Updated last year
- https://arxiv.org/abs/2412.02776☆62Updated 10 months ago
- ☆149Updated last month
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆66Updated last month
- Use LLMs for document ranking☆148Updated 5 months ago
- ☆76Updated this week
- Do you want to learn AI Security but don't know where to start ? Take a look at this map.☆27Updated last year
- ☆69Updated 3 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆84Updated last year