ethz-spylab / agentdojoView external linksLinks
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
☆431Feb 3, 2026Updated last week
Alternatives and similar repositories for agentdojo
Users that are interested in agentdojo are comparing it to the libraries listed below
Sorting:
- ☆115Jul 2, 2024Updated last year
- Agent Security Bench (ASB)☆182Oct 27, 2025Updated 3 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"☆84Jul 24, 2025Updated 6 months ago
- official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries☆63Nov 10, 2025Updated 3 months ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents☆123Feb 19, 2025Updated 11 months ago
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs☆391Oct 29, 2025Updated 3 months ago
- Code to generate NeuralExecs (prompt injection for LLMs)☆27Oct 5, 2025Updated 4 months ago
- ☆27Sep 11, 2025Updated 5 months ago
- PFI: Prompt Flow Integrity to Prevent Privilege Escalation in LLM Agents☆26Mar 26, 2025Updated 10 months ago
- ☆30Mar 12, 2025Updated 11 months ago
- Dataset for the Tensor Trust project☆48Mar 17, 2024Updated last year
- A benchmark for prompt injection detection systems.☆159Dec 16, 2025Updated last month
- A fast + lightweight implementation of the GCG algorithm in PyTorch☆317May 13, 2025Updated 9 months ago
- [EMNLP 2025 Oral] IPIGuard: A Novel Tool Dependency Graph-Based Defense Against Indirect Prompt Injection in LLM Agents☆16Sep 16, 2025Updated 4 months ago
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"☆197Apr 12, 2025Updated 10 months ago
- ☆13Mar 9, 2025Updated 11 months ago
- Code for the paper "Defeating Prompt Injections by Design"☆246Jun 20, 2025Updated 7 months ago
- Official implementation of the WASP web agent security benchmark☆67Aug 12, 2025Updated 6 months ago
- ☆28Aug 31, 2025Updated 5 months ago
- ☆22May 28, 2025Updated 8 months ago
- Code for the API, workload execution, and agents underlying the LLMail-Inject Adpative Prompt Injection Challenge☆19Oct 21, 2025Updated 3 months ago
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).☆1,856Jan 24, 2026Updated 3 weeks ago
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]☆527Apr 4, 2025Updated 10 months ago
- ☆691Jul 2, 2025Updated 7 months ago
- ☆35May 21, 2025Updated 8 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces.☆47Jan 12, 2026Updated last month
- [ICLR 2024] The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…☆427Jan 22, 2025Updated last year
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge☆39Sep 17, 2025Updated 4 months ago
- Every practical and proposed defense against prompt injection.☆630Feb 22, 2025Updated 11 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".☆68Oct 23, 2024Updated last year
- ☆99Aug 11, 2025Updated 6 months ago
- ☆37Oct 2, 2024Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal☆847Aug 16, 2024Updated last year
- Fluent student-teacher redteaming☆23Jul 25, 2024Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models☆233Jan 27, 2026Updated 2 weeks ago
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents☆29Jun 24, 2025Updated 7 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆112Mar 12, 2024Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)☆162Nov 30, 2024Updated last year
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups☆50Dec 23, 2024Updated last year