☆134 · Aug 11, 2025 · Updated 8 months ago
Alternatives and similar repositories for Agent-SafetyBench
Users interested in Agent-SafetyBench are comparing it to the repositories listed below.
- Agent Security Bench (ASB) · ☆231 · Apr 16, 2026 · Updated 2 weeks ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) · ☆102 · Jan 11, 2026 · Updated 3 months ago
- [ACL 2025] The official code for "AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection" · ☆40 · Aug 4, 2025 · Updated 9 months ago
- Code implementation of R^2-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning · ☆22 · Jul 8, 2024 · Updated last year
- ☆25 · Jan 17, 2025 · Updated last year
- [ACL 2025] LongSafety: Evaluating Long-Context Safety of Large Language Models · ☆16 · Jun 18, 2025 · Updated 10 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents · ☆548 · Mar 30, 2026 · Updated last month
- ☆39 · Oct 15, 2024 · Updated last year
- ☆23 · Oct 11, 2024 · Updated last year
- The official repository for the guided jailbreak benchmark · ☆29 · Jul 28, 2025 · Updated 9 months ago
- Example agents for the Dreadnode platform · ☆33 · Dec 19, 2025 · Updated 4 months ago
- MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols · ☆35 · Mar 4, 2026 · Updated 2 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… · ☆90 · May 9, 2025 · Updated 11 months ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" · ☆108 · May 20, 2025 · Updated 11 months ago
- A lightweight library for large language model (LLM) jailbreaking defense · ☆60 · Sep 11, 2025 · Updated 7 months ago
- ☆25 · Jul 20, 2025 · Updated 9 months ago
- ☆20 · May 14, 2025 · Updated 11 months ago
- [ICML 2025] Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions · ☆14 · Mar 7, 2026 · Updated last month
- OS-Harm: A Benchmark for Measuring Safety of Computer Use Agents [NeurIPS 2025 Spotlight] · ☆67 · Sep 18, 2025 · Updated 7 months ago
- The repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 · ☆13 · Apr 23, 2025 · Updated last year
- ☆15 · Jun 7, 2024 · Updated last year
- ☆130 · Feb 3, 2025 · Updated last year
- ☆40 · Oct 2, 2024 · Updated last year
- A curated list of materials on AI guardrails · ☆52 · Jun 3, 2025 · Updated 11 months ago
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning [Accepted at ICML 2023] · ☆14 · Mar 31, 2024 · Updated 2 years ago
- Accepted by ECCV 2024 · ☆203 · Oct 15, 2024 · Updated last year
- [ACL 2024] Benchmarking Knowledge Boundary for Large Language Models: A Different Perspective on Model Evaluation · ☆10 · May 26, 2024 · Updated last year
- First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and saf… · ☆61 · Mar 7, 2026 · Updated last month
- ☆29 · Aug 31, 2025 · Updated 8 months ago
- ☆31 · Sep 23, 2024 · Updated last year
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] · ☆112 · Sep 27, 2024 · Updated last year
- ☆32 · May 22, 2025 · Updated 11 months ago
- [ACL 2025] Data and Code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety" · ☆60 · Jul 21, 2025 · Updated 9 months ago
- [NeurIPS 2025] The official implementation of the paper "DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agen… · ☆48 · Apr 19, 2026 · Updated 2 weeks ago
- ☆22 · Oct 25, 2024 · Updated last year
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety [ACL 2024] · ☆283 · Jul 28, 2025 · Updated 9 months ago
- [ICLR 2025] Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" · ☆33 · Jun 23, 2025 · Updated 10 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" · ☆70 · Oct 23, 2024 · Updated last year
- ☆132 · Dec 3, 2025 · Updated 5 months ago