Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models
☆27 · Mar 15, 2025 · Updated 11 months ago
Alternatives and similar repositories for MMDT
Users interested in MMDT are comparing it to the repositories listed below.
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆66 · Nov 14, 2025 · Updated 3 months ago
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Jun 20, 2025 · Updated 8 months ago
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods ☆14 · Apr 8, 2025 · Updated 10 months ago
- ☆16 · Apr 7, 2025 · Updated 10 months ago
- ☆17 · Jun 18, 2025 · Updated 8 months ago
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" ☆16 · Jul 15, 2024 · Updated last year
- ☆16 · Feb 23, 2025 · Updated last year
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents ☆32 · Jun 24, 2025 · Updated 8 months ago
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting" ☆18 · Apr 15, 2025 · Updated 10 months ago
- (NeurIPS 2025) Official implementation for "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?" ☆47 · Jun 3, 2025 · Updated 8 months ago
- Code for the paper [CVPR'24: Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?] ☆23 · Apr 2, 2024 · Updated last year
- ☆22 · May 28, 2025 · Updated 9 months ago
- A collection of papers about Private Evolution ☆18 · Oct 7, 2025 · Updated 4 months ago
- [ICLR 2025] Official codebase for the paper "Multimodal Situational Safety" ☆30 · Jun 23, 2025 · Updated 8 months ago
- ☆21 · Mar 17, 2025 · Updated 11 months ago
- "In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; ICML 2024. ☆29 · Oct 18, 2023 · Updated 2 years ago
- ☆24 · Dec 8, 2024 · Updated last year
- [ICLR 2025] Official repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆66 · Jun 9, 2025 · Updated 8 months ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆313 · Sep 16, 2024 · Updated last year
- Repository introducing research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Oct 30, 2023 · Updated 2 years ago
- [TMLR 2025] On Memorization in Diffusion Models ☆31 · Oct 5, 2023 · Updated 2 years ago
- ☆29 · Aug 31, 2025 · Updated 6 months ago
- ☆27 · Nov 9, 2022 · Updated 3 years ago
- ☆28 · Dec 18, 2025 · Updated 2 months ago
- ☆175 · Oct 31, 2025 · Updated 4 months ago
- Auditing agents for fine-tuning safety ☆20 · Oct 21, 2025 · Updated 4 months ago
- ☆11 · Dec 23, 2024 · Updated last year
- Official code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" ☆31 · Oct 26, 2023 · Updated 2 years ago
- Preparing for ML Interviews ☆54 · Jan 12, 2026 · Updated last month
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆92 · Dec 20, 2025 · Updated 2 months ago
- ☆12 · May 6, 2022 · Updated 3 years ago
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Sep 11, 2025 · Updated 5 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Sep 24, 2024 · Updated last year
- BrainWash: A Poisoning Attack to Forget in Continual Learning ☆12 · Apr 15, 2024 · Updated last year
- [NAACL 2025 Main] Official implementation of MLLMU-Bench ☆50 · Mar 13, 2025 · Updated 11 months ago
- A Benchmark for Evaluating Safety and Trustworthiness in Web Agents for Enterprise Scenarios ☆19 · Updated this week
- [KDD'23] Code repository for the paper "DyGen: Learning from Noisy Labels via Dynamics-Enhanced Generative Modeling" ☆11 · Jun 14, 2023 · Updated 2 years ago
- On the Robustness of GUI Grounding Models Against Image Attacks ☆12 · Apr 8, 2025 · Updated 10 months ago
- [ICLR 2024] Official implementation for the paper "Beyond imitation: Leveraging fine-grained quality signals for alignment" ☆10 · May 5, 2024 · Updated last year