[NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"
☆211 · Updated Apr 12, 2025
Alternatives and similar repositories for AgentPoison
Users interested in AgentPoison are comparing it to the repositories listed below.
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆71 · Updated Nov 14, 2025
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆250 · Updated Jan 27, 2026
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆112 · Updated Sep 27, 2024
- Agent Security Bench (ASB) ☆214 · Updated Oct 27, 2025
- ICL backdoor attack ☆17 · Updated Nov 4, 2024
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆422 · Updated Oct 29, 2025
- Code for the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" ☆14 · Updated Nov 17, 2023
- ☆41 · Updated Dec 9, 2025
- Code to generate NeuralExecs (prompt injection for LLMs) ☆27 · Updated Oct 5, 2025
- ☆37 · Updated Oct 2, 2024
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents ☆515 · Updated Mar 30, 2026
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆117 · Updated Mar 26, 2024
- Code for our paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems" ☆13 · Updated Dec 13, 2024
- [NeurIPS 2025] Official implementation for "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?" ☆48 · Updated Jun 3, 2025
- [ICLR24] Official repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆50 · Updated Jul 24, 2024
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆287 · Updated Mar 13, 2026
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆28 · Updated Mar 15, 2025
- RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents ☆24 · Updated Aug 23, 2024
- ☆21 · Updated Jan 6, 2025
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLM ☆86 · Updated Nov 3, 2024
- ☆18 · Updated Jun 18, 2025
- Universal preflight security scanner for AI coding agents: detects hooks injection, credential exfiltration & backdoors in .cursorrules,… ☆55 · Updated this week
- [S&P 2026] SoK: Evaluating Jailbreak Guardrails for Large Language Models ☆37 · Updated Dec 17, 2025
- ☆31 · Updated Feb 27, 2025
- ☆182 · Updated Oct 31, 2025
- ☆23 · Updated Oct 25, 2024
- ☆43 · Updated Oct 12, 2025
- Source code for the ACL 2025 paper "Unveiling Privacy Risks in LLM Agent Memory" ☆28 · Updated Dec 2, 2025
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) ☆1,923 · Updated Apr 2, 2026
- ☆52 · Updated Feb 8, 2025
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆55 · Updated Jun 2, 2025
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆61 · Updated Jan 15, 2025
- [ICLR 2024] Official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆438 · Updated Jan 22, 2025
- ☆14 · Updated Mar 9, 2025
- Skill-Inject: Measuring Agent Vulnerability to Skill File Attacks ☆44 · Updated Mar 26, 2026
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems ☆226 · Updated Dec 22, 2024
- ☆18 · Updated Jun 15, 2021
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆57 · Updated Mar 22, 2025
- ☆16 · Updated Sep 17, 2024