ServiceNow / DoomArenaLinks
DoomArena is a Framework for Testing AI Agents Against Evolving Security Threats
☆44Updated 2 weeks ago
Alternatives and similar repositories for DoomArena
Users that are interested in DoomArena are comparing it to the libraries listed below
Sorting:
- OS-Harm: A Benchmark for Measuring Safety of Computer Use Agents [NeurIPS 2025 Spotlight]☆30Updated last week
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆275Updated 3 weeks ago
- ☆31Updated 6 months ago
- Collection of evals for Inspect AI☆236Updated this week
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆66Updated 3 weeks ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use☆165Updated last year
- ☆150Updated 3 months ago
- Dataset for the Tensor Trust project☆45Updated last year
- Guardrails for secure and robust agent development☆346Updated 2 months ago
- Improving Alignment and Robustness with Circuit Breakers☆235Updated last year
- ☆133Updated this week
- ☆138Updated 2 months ago
- Code to break Llama Guard☆32Updated last year
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024.☆114Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training".☆116Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆114Updated last year
- ☆39Updated 10 months ago
- Red-Teaming Language Models with DSPy☆213Updated 7 months ago
- ☆46Updated last year
- ☆57Updated this week
- A Comprehensive Assessment of Trustworthiness in GPT Models☆303Updated last year
- ☆63Updated this week
- ☆34Updated 10 months ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed.☆105Updated this week
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs☆91Updated 9 months ago
- A simple evaluation of generative language models and safety classifiers.☆64Updated this week
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives☆70Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"☆61Updated 3 months ago
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents☆48Updated 2 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"☆70Updated 2 months ago