Code for the paper "SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents"
☆65, updated Feb 25, 2025
Alternatives and similar repositories for SafeAgentBench
Users interested in SafeAgentBench are comparing it to the repositories listed below.
- [ICLR 2025] Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" (☆30, updated Jun 23, 2025)
- ☆14, updated Feb 26, 2025
- [NeurIPS 2025 Spotlight] Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning (☆123, updated Jan 11, 2026)
- [NeurIPS 2025] Official repository of RiOSWorld: Benchmarking the Risk of Multimodal Computer-Use Agents (☆114, updated Dec 2, 2025)
- ☆56, updated May 21, 2025
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" (☆16, updated Jul 15, 2024)
- ☆14, updated Mar 1, 2019
- [ACL 2025] "World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning." https://arxiv.org/abs/2503.1… (☆17, updated Jul 22, 2025)
- The code of "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" (☆17, updated Feb 26, 2024)
- ☆121, updated Feb 3, 2025
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal … (☆80, updated Jun 6, 2024)
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" (☆51, updated Dec 23, 2024)
- Code for the IROS 2024 paper "AutoNeRF: Training Implicit Scene Representations with Autonomous Agents" (☆17, updated Oct 24, 2024)
- AIR-Bench 2024 is a safety benchmark that aligns with emerging government regulations and company policies (☆28, updated Aug 14, 2024)
- ☆77, updated Dec 19, 2024
- Official repo of "Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics" (☆68, updated Jan 27, 2026)
- ☆18, updated Mar 30, 2025
- [ACL 2025] The official code for "AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection" (☆33, updated Aug 4, 2025)
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" (☆15, updated Aug 7, 2025)
- Benchmarking Physical Risk Awareness of Foundation Model-based Embodied AI Agents (☆23, updated Nov 28, 2024)
- ☆25, updated Nov 4, 2024
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents (☆124, updated Feb 19, 2025)
- Emoji Attack [ICML 2025] (☆41, updated Jul 15, 2025)
- The official implementation of the paper "Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness,"… (☆19, updated Jul 15, 2024)
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models (☆33, updated May 21, 2025)
- Focused on the safety and security of Embodied AI (☆97, updated Dec 19, 2025)
- ☆72, updated Mar 30, 2025
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast (☆118, updated Mar 26, 2024)
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" (☆35, updated Oct 23, 2024)
- ☆29, updated Mar 3, 2021
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) project: Diving into Self-Evolving Training for Multimodal Reasoning (☆71, updated Jul 13, 2025)
- ☆43, updated Jan 13, 2025
- The official repository for the ICLR 2025 paper "BadRobot: Manipulating Embodied LLMs in the Physical World" (☆41, updated Jun 26, 2025)
- Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models (☆34, updated Oct 19, 2023)
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… (☆88, updated May 9, 2025)
- ☆39, updated May 17, 2025
- Enterprise AI security platform: real-time firewall protection for LLM applications against prompt injection, data leakage, and function… (☆23, updated Sep 14, 2025)
- [ICLR 2023] Spiking Convolutional Neural Networks for Text Classification (☆33, updated Jul 12, 2024)
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models (☆85, updated Jan 19, 2025)