[NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"
☆199, updated Apr 12, 2025
Alternatives and similar repositories for AgentPoison
Users interested in AgentPoison are comparing it to the repositories listed below.
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents (☆66, updated Nov 14, 2025)
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models (☆234, updated Jan 27, 2026)
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] (☆109, updated Sep 27, 2024)
- Agent Security Bench (ASB) (☆186, updated Oct 27, 2025)
- ☆37, updated Oct 2, 2024
- A benchmark for prompt injection attacks and defenses in LLMs (☆395, updated Oct 29, 2025)
- ☆34, updated Dec 9, 2025
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents (☆443, updated Feb 3, 2026)
- Code for our paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems" (☆13, updated Dec 13, 2024)
- Code for the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" (☆14, updated Nov 17, 2023)
- ☆23, updated Oct 25, 2024
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs (☆83, updated Nov 3, 2024)
- Plancraft is a Minecraft environment and agent suite to test planning capabilities in LLMs (☆26, updated Nov 7, 2025)
- ☆17, updated Jun 18, 2025
- ☆21, updated Jul 21, 2025
- Code to generate NeuralExecs (prompt injections for LLMs) (☆27, updated Oct 5, 2025)
- ICL backdoor attack (☆17, updated Nov 4, 2024)
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models (☆276, updated Feb 2, 2026)
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast (☆118, updated Mar 26, 2024)
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" (☆19, updated Mar 10, 2025)
- ☆47, updated this week
- ☆38, updated Oct 12, 2025
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents (☆32, updated Jun 24, 2025)
- ☆175, updated Oct 31, 2025
- ☆18, updated Jun 15, 2021
- A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) (☆1,865, updated Jan 24, 2026)
- ☆28, updated Feb 27, 2025
- ☆23, updated Jan 17, 2025
- [EMNLP'24] EHRAgent: Code Empowers Large Language Models for Complex Tabular Reasoning on Electronic Health Records (☆124, updated Dec 26, 2024)
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… (☆56, updated Mar 22, 2025)
- [NeurIPS-2023] Annual Conference on Neural Information Processing Systems (☆227, updated Dec 22, 2024)
- Martingale posterior neural networks for fast sequential decision making @ NeurIPS 2025 (☆23, updated Nov 13, 2025)
- [ICLR 2025 Spotlight] The official implementation of our ICLR 2025 paper "AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to… (☆347, updated Oct 8, 2025)
- The official repository for the guided jailbreak benchmark (☆28, updated Jul 28, 2025)
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training (☆32, updated Jan 9, 2022)
- RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents (☆24, updated Aug 23, 2024)
- ☆20, updated Mar 14, 2022
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… (☆430, updated Jan 22, 2025)
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts (☆568, updated this week)