The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods!
☆15 · Apr 8, 2025 · Updated 11 months ago
Alternatives and similar repositories for OpenSafeMLRM
Users that are interested in OpenSafeMLRM are comparing it to the libraries listed below
- [AAAI'26 Oral] Official Implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data ☆33 · Apr 7, 2025 · Updated 11 months ago
- [NDSS'25] The official implementation of safety misalignment. ☆17 · Jan 8, 2025 · Updated last year
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆88 · Aug 25, 2025 · Updated 6 months ago
- [ICLR 2024] Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks ☆14 · Feb 6, 2024 · Updated 2 years ago
- CVPR 2025 - R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning ☆22 · Aug 28, 2025 · Updated 6 months ago
- Code for the ECCV 2022 paper “Learning with Recoverable Forgetting” ☆21 · Jul 27, 2022 · Updated 3 years ago
- ☆24 · Feb 17, 2026 · Updated last month
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆27 · Mar 15, 2025 · Updated last year
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Sep 11, 2025 · Updated 6 months ago
- Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation" ☆21 · Jan 31, 2026 · Updated last month
- Code to replicate the Representation Noising paper and tools for evaluating defences against harmful fine-tuning ☆24 · Dec 12, 2024 · Updated last year
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆27 · Dec 21, 2025 · Updated 3 months ago
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆26 · Sep 10, 2024 · Updated last year
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆77 · Mar 1, 2025 · Updated last year
- Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" ☆32 · Jun 23, 2025 · Updated 8 months ago
- An official PyTorch implementation of the paper "Partial Network Cloning", CVPR 2023 ☆13 · Mar 21, 2023 · Updated 3 years ago
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation ☆62 · Mar 2, 2026 · Updated 2 weeks ago
- ☆12 · Sep 28, 2023 · Updated 2 years ago
- [ICLR 2025] MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs ☆43 · Mar 25, 2025 · Updated 11 months ago
- This is the official code repository for the paper "Exploiting the Adversarial Example Vulnerability of Transfer Learning of Source Code". ☆17 · Sep 21, 2025 · Updated 6 months ago
- ☆15 · Jul 24, 2024 · Updated last year
- ☆11 · Oct 15, 2024 · Updated last year
- EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection ☆18 · Jan 6, 2025 · Updated last year
- FakePartsBench: 25K+ AI-generated videos with pixel- and frame-level annotations of full and partial deepfakes. ☆24 · Aug 31, 2025 · Updated 6 months ago
- Prompt Generator model for Stable Diffusion Models ☆12 · Jun 20, 2023 · Updated 2 years ago
- This repo covers the safety topic, including attacks, defenses, and studies related to reasoning and RL ☆61 · Sep 5, 2025 · Updated 6 months ago
- Using PCA, Autoencoder and Fisher linear discriminant to extract the effective representations from the face images. Do the reconstructio… ☆12 · Apr 23, 2019 · Updated 6 years ago
- A BP neural network implemented from scratch in NumPy, comparing the effects of training tricks such as Dropout and Batch Normalization. ☆11 · Dec 19, 2019 · Updated 6 years ago
- ☆14 · Feb 26, 2025 · Updated last year
- Towards Robustness of Deep Program Processing Models – Detection, Estimation and Enhancement ☆21 · Oct 29, 2022 · Updated 3 years ago
- Code and instructions accompanying the ICCV'23 paper "Prototype-based Dataset Comparison" ☆18 · Dec 15, 2023 · Updated 2 years ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Jan 15, 2026 · Updated 2 months ago
- Code and model for SAA ☆12 · Sep 21, 2023 · Updated 2 years ago
- [NeurIPS 2025@FoRLM] R1-Compress: Long Chain-of-Thought Compression via Chunk Compression and Search ☆17 · Jan 24, 2026 · Updated last month
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking ☆12 · Aug 22, 2025 · Updated 7 months ago
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) ☆10 · Jul 15, 2024 · Updated last year
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Jun 20, 2025 · Updated 9 months ago
- Röttger et al. (2025): "MSTS: A Multimodal Safety Test Suite for Vision-Language Models" ☆16 · Mar 31, 2025 · Updated 11 months ago
- [WSDM 2026] LookAhead Tuning: Safer Language Models via Partial Answer Previews ☆17 · Dec 14, 2025 · Updated 3 months ago