segev-shlomov / ST-WebAgentBench
A Benchmark for Evaluating Safety and Trustworthiness in Web Agents for Enterprise Scenarios
☆16 · Updated 5 months ago
Alternatives and similar repositories for ST-WebAgentBench
Users interested in ST-WebAgentBench are comparing it to the repositories listed below:
- ☆22 · Updated last year
- ☆17 · Updated last year
- [EMNLP 2024] Multi-modal reasoning problems via code generation. ☆26 · Updated 8 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use. ☆171 · Updated last year
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents. ☆53 · Updated 8 months ago
- ☆68 · Updated last year
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents. ☆52 · Updated 3 months ago
- Attack to induce hallucinations in LLMs. ☆161 · Updated last year
- [ICLR 2024] Showing properties of safety tuning and exaggerated safety. ☆88 · Updated last year
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…). ☆83 · Updated last year
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆328 · Updated last year
- A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022. ☆35 · Updated 2 years ago
- ☆122 · Updated this week
- ☆187 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆163 · Updated 2 years ago
- [EMNLP 2024] A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models. ☆18 · Updated last year
- [arXiv 2024] Denial-of-Service Poisoning Attacks on Large Language Models. ☆22 · Updated last year
- ☆48 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct. ☆190 · Updated 9 months ago
- ☆85 · Updated last year
- The official code for "An Engorgio Prompt Makes Large Language Model Babble on". ☆15 · Updated 2 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning". ☆174 · Updated 5 months ago
- An LLM can Fool Itself: A Prompt-Based Adversarial Attack (ICLR 2024). ☆105 · Updated 9 months ago
- Code for the paper "Defending against LLM Jailbreaking via Backtranslation". ☆31 · Updated last year
- [NeurIPS 2025] Official repository of RiOSWorld: Benchmarking the Risk of Multimodal Computer-Use Agents. ☆45 · Updated this week
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment". ☆105 · Updated last year
- ☆46 · Updated last year
- Papers about red-teaming LLMs and multimodal models. ☆145 · Updated 5 months ago
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups". ☆44 · Updated 10 months ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models". ☆90 · Updated last year