☆105 · Aug 11, 2025 · Updated 6 months ago
Alternatives and similar repositories for Agent-SafetyBench
Users interested in Agent-SafetyBench are comparing it to the repositories listed below.
- Agent Security Bench (ASB) · ☆186 · Oct 27, 2025 · Updated 4 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) · ☆99 · Jan 11, 2026 · Updated last month
- ☆23 · Jan 17, 2025 · Updated last year
- [ACL 2025] The official code for "AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection" · ☆33 · Aug 4, 2025 · Updated 7 months ago
- The official repository for the guided jailbreak benchmark · ☆29 · Jul 28, 2025 · Updated 7 months ago
- Code for our paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems" · ☆13 · Dec 13, 2024 · Updated last year
- [ACL 2025] LongSafety: Evaluating Long-Context Safety of Large Language Models · ☆16 · Jun 18, 2025 · Updated 8 months ago
- [ICLR 2026] BARREL: Boundary-Aware Reasoning for Factual and Reliable LRMs · ☆17 · May 21, 2025 · Updated 9 months ago
- ☆37 · Oct 15, 2024 · Updated last year
- Reasoning Activation in LLMs via Small Model Transfer (NeurIPS 2025) · ☆21 · Oct 16, 2025 · Updated 4 months ago
- ☆15 · Jun 7, 2024 · Updated last year
- [ICLR 2026] The official code for "Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models" · ☆23 · Feb 7, 2026 · Updated 3 weeks ago
- Code implementation of R^2-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning · ☆22 · Jul 8, 2024 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" · ☆107 · May 20, 2025 · Updated 9 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… · ☆86 · Jun 20, 2025 · Updated 8 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… · ☆88 · May 9, 2025 · Updated 9 months ago
- MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols · ☆30 · Sep 24, 2025 · Updated 5 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents · ☆454 · Feb 3, 2026 · Updated last month
- Accepted by ECCV 2024 · ☆192 · Oct 15, 2024 · Updated last year
- ☆178 · Oct 31, 2025 · Updated 4 months ago
- [ACL 2025] Data and code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety" · ☆54 · Jul 21, 2025 · Updated 7 months ago
- ☆23 · Oct 25, 2024 · Updated last year
- Efficient LLM query routing via multi-sampling. BEST-Route selects both model and number of responses based on query difficulty, cutting … · ☆44 · Aug 6, 2025 · Updated 7 months ago
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] · ☆109 · Sep 27, 2024 · Updated last year
- Official repository for "Safety in Large Reasoning Models: A Survey", exploring safety risks, attacks, and defenses for Large Reasoning … · ☆88 · Aug 25, 2025 · Updated 6 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" · ☆174 · Apr 23, 2025 · Updated 10 months ago
- ☆23 · Oct 11, 2024 · Updated last year
- NeurIPS'24 - LLM Safety Landscape · ☆39 · Oct 21, 2025 · Updated 4 months ago
- ☆29 · Aug 31, 2025 · Updated 6 months ago
- A lightweight library for large language model (LLM) jailbreaking defense · ☆61 · Sep 11, 2025 · Updated 5 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" · ☆69 · Oct 23, 2024 · Updated last year
- MAKGED is the first multi-agent framework for collaborative error detection in knowledge graphs · ☆30 · Jul 20, 2025 · Updated 7 months ago
- A toolkit to assess data privacy in LLMs (under development) · ☆68 · Jan 2, 2025 · Updated last year
- ☆40 · Aug 10, 2024 · Updated last year
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety [ACL 2024] · ☆273 · Jul 28, 2025 · Updated 7 months ago
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling · ☆34 · Nov 8, 2024 · Updated last year
- [AAAI'26 Oral] Official implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data · ☆33 · Apr 7, 2025 · Updated 10 months ago
- Repository for the paper "Understanding and Mitigating Language Confusion in LLMs" · ☆29 · Jun 28, 2024 · Updated last year
- A curated list of papers and open-source code for CV adversarial attack competitions · ☆27 · Jan 9, 2023 · Updated 3 years ago