[NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"
☆203 · Updated Apr 12, 2025
Alternatives and similar repositories for AgentPoison
Users interested in AgentPoison are comparing it to the repositories listed below.
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents — ☆67 · Updated Nov 14, 2025
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models — ☆244 · Updated Jan 27, 2026
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] — ☆111 · Updated Sep 27, 2024
- Agent Security Bench (ASB) — ☆201 · Updated Oct 27, 2025
- ICL backdoor attack — ☆17 · Updated Nov 4, 2024
- A benchmark for prompt injection attacks and defenses in LLMs — ☆409 · Updated Oct 29, 2025
- Code for the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" — ☆14 · Updated Nov 17, 2023
- ☆41 · Updated Dec 9, 2025
- Code to generate NeuralExecs (prompt injection for LLMs) — ☆27 · Updated Oct 5, 2025
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents — ☆488 · Updated Mar 12, 2026
- ☆37 · Updated Oct 2, 2024
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast — ☆118 · Updated Mar 26, 2024
- Code for the paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems" — ☆13 · Updated Dec 13, 2024
- [NeurIPS 2025] Official implementation of "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?" — ☆47 · Updated Jun 3, 2025
- [ICLR 2024] Official repo of "BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models" — ☆50 · Updated Jul 24, 2024
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models — ☆281 · Updated Mar 13, 2026
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models — ☆27 · Updated Mar 15, 2025
- RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents — ☆24 · Updated Aug 23, 2024
- Skill-Inject: Measuring Agent Vulnerability to Skill File Attacks — ☆34 · Updated Feb 24, 2026
- ☆20 · Updated Jan 6, 2025
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs — ☆85 · Updated Nov 3, 2024
- ☆18 · Updated Jun 18, 2025
- ☆29 · Updated Feb 27, 2025
- [S&P 2026] SoK: Evaluating Jailbreak Guardrails for Large Language Models — ☆35 · Updated Dec 17, 2025
- ☆40 · Updated Oct 12, 2025
- ☆181 · Updated Oct 31, 2025
- ☆23 · Updated Oct 25, 2024
- A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) — ☆1,899 · Updated this week
- ☆26 · Updated Oct 27, 2025
- ☆52 · Updated Feb 8, 2025
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access — ☆53 · Updated Jun 2, 2025
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" — ☆60 · Updated Jan 15, 2025
- [ICLR 2024] Official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" — ☆434 · Updated Jan 22, 2025
- ☆14 · Updated Mar 9, 2025
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems — ☆228 · Updated Dec 22, 2024
- [EMNLP'24] EHRAgent: Code Empowers Large Language Models for Complex Tabular Reasoning on Electronic Health Records — ☆127 · Updated Dec 26, 2024
- ☆18 · Updated Jun 15, 2021
- Official code release for the paper "RL is a hammer and LLMs are nails: A simple RL approach to stronger prompt injection attacks" — ☆42 · Updated Feb 11, 2026
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… — ☆57 · Updated Mar 22, 2025