Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning"
☆88, updated Feb 26, 2025
Alternatives and similar repositories for STAIR
Users interested in STAIR are comparing it to the repositories listed below.
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) · ☆174, updated Jun 27, 2025
- Code for the paper "Safety Layers in Aligned Large Language Models: The Key to LLM Security" (ICLR 2025) · ☆21, updated Apr 26, 2025
- ☆20, updated Jun 16, 2025
- An implementation for MLLM oversensitivity evaluation · ☆17, updated Nov 16, 2024
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … · ☆59, updated Sep 11, 2025
- ☆28, updated Jul 16, 2024
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation · ☆58, updated Jan 23, 2026
- ☆26, updated Mar 4, 2025
- Prompt Generator model for Stable Diffusion Models · ☆11, updated Jun 20, 2023
- ☆109, updated Feb 16, 2024
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks · ☆19, updated Sep 18, 2025
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning · ☆11, updated Oct 29, 2024
- ☆11, updated Aug 4, 2024
- ☆11, updated Jun 9, 2023
- Code repo for the paper "Say More with Less: Understanding Prompt Learning Behaviors through Gist Compression" · ☆12, updated Feb 27, 2024
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … · ☆82, updated this week
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts · ☆191, updated Jun 26, 2025
- PyTorch implementation of NPAttack · ☆12, updated Jul 7, 2020
- ☆48, updated Feb 9, 2021
- Code for "SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation" (ICLR 2025) · ☆24, updated Oct 23, 2025
- ☆35, updated May 21, 2025
- ☆70, updated Feb 4, 2024
- Röttger et al. (2025): "MSTS: A Multimodal Safety Test Suite for Vision-Language Models" · ☆16, updated Mar 31, 2025
- ☆11, updated Apr 27, 2022
- ☆14, updated Jun 6, 2023
- ☆21, updated Jul 3, 2025
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking · ☆12, updated Aug 22, 2025
- ☆60, updated Mar 9, 2023
- Transfer attack; adversarial examples; black-box attack; unrestricted adversarial attacks on ImageNet; CVPR 2021 Tianchi black-box competition · ☆24, updated Oct 24, 2021
- ☆19, updated Jun 29, 2025
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" · ☆22, updated May 6, 2025
- Accepted by CVPR 2025 (highlight) · ☆22, updated Jun 8, 2025
- Official code for reproducibility of the NeurIPS 2023 paper "Adversarial Examples Are Not Real Features" · ☆16, updated Jun 27, 2024
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! · ☆39, updated Aug 2, 2024
- Code for "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search" (NeurIPS 2024) · ☆17, updated Oct 22, 2024
- Submission guide + discussion board for AI Singapore Global Challenge for Safe and Secure LLMs (Track 1A) · ☆16, updated Jul 4, 2024
- Official implementation repository for the paper "Towards General Conceptual Model Editing via Adversarial Representation Engineering" · ☆19, updated Dec 6, 2024
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" · ☆17, updated Feb 26, 2024
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" · ☆174, updated Apr 23, 2025