thu-ml / MLA-Trust
A toolbox for benchmarking the trustworthiness of multimodal LLM agents across the truthfulness, controllability, safety, and privacy dimensions through 34 interactive tasks
☆57 · Updated 5 months ago
Alternatives and similar repositories for MLA-Trust
Users interested in MLA-Trust are comparing it to the libraries listed below.
- [ACL 2025] Data and code for the paper VLSBench: Unveiling Visual Leakage in Multimodal Safety ☆53 · Updated 4 months ago
- A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ☆170 · Updated 5 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆68 · Updated last year
- On the Robustness of GUI Grounding Models Against Image Attacks ☆12 · Updated 7 months ago
- Focused on the safety and security of Embodied AI ☆81 · Updated last month
- ☆50 · Updated 9 months ago
- [NeurIPS 2025] Official repository of RiOSWorld: Benchmarking the Risk of Multimodal Computer-Use Agents ☆47 · Updated 2 weeks ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆78 · Updated this week
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆85 · Updated 9 months ago
- Accepted by ECCV 2024 ☆177 · Updated last year
- Reinforcement learning code for the SPA-VL dataset ☆42 · Updated last year
- [ICLR 2024 Spotlight 🔥] - [Best Paper Award SoCal NLP 2023 🏆] - Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ☆76 · Updated last year
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆32 · Updated 6 months ago
- ☆25 · Updated 8 months ago
- ☆66 · Updated 7 months ago
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ☆24 · Updated last year
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆33 · Updated last year
- Data and code for the paper IS-Bench: Evaluating Interactive Safety of VLM-Driven Embodied Agents in Daily Household Tasks ☆31 · Updated this week
- [ICLR 2025] Official codebase for the paper "Multimodal Situational Safety" ☆30 · Updated 5 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆80 · Updated 3 months ago
- Code for the paper "SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents" ☆57 · Updated 9 months ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆115 · Updated 9 months ago
- ☆47 · Updated 6 months ago
- Official repo of "Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics" ☆50 · Updated 3 months ago
- Accepted by IJCAI-24 Survey Track ☆223 · Updated last year
- ☆25 · Updated last year
- ☆69 · Updated 6 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-Language Models via Typographic Visual Prompts ☆179 · Updated 5 months ago
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" ☆15 · Updated 4 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆81 · Updated 6 months ago