Alternatives and similar repositories for jailbreak-reasoning-openai-o1o3-deepseek-r1 (☆119 · updated Apr 27, 2025)
Users interested in jailbreak-reasoning-openai-o1o3-deepseek-r1 are comparing it to the libraries listed below.
- Official Implementation of implicit reference attack (☆11 · updated Oct 16, 2024)
- ☆12 · updated Jul 16, 2025
- [ICLR24] Official Repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models (☆50 · updated Jul 24, 2024)
- ☆13 · updated Jan 14, 2026
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… (☆13 · updated Sep 16, 2024)
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods! (☆15 · updated Apr 8, 2025)
- Emoji Attack [ICML 2025] (☆41 · updated Jul 15, 2025)
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts (☆196 · updated Jun 26, 2025)
- ☆124 · updated Feb 3, 2025
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking (☆12 · updated Aug 22, 2025)
- ☆49 · updated Feb 25, 2026
- A new algorithm that formulates jailbreaking as a reasoning problem (☆26 · updated Jul 2, 2025)
- ☆11 · updated Sep 10, 2024
- Adversarial Attack on Graph Neural Networks as an Influence Maximization Problem (☆20 · updated Oct 27, 2021)
- ☆129 · updated Jul 7, 2025
- Benchmarking Physical Risk Awareness of Foundation Model-based Embodied AI Agents (☆23 · updated Nov 28, 2024)
- The code of “Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning” (☆17 · updated Feb 26, 2024)
- ☆24 · updated Feb 17, 2026
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" (☆16 · updated Jul 15, 2024)
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers (☆66 · updated Aug 25, 2024)
- Improving Alignment and Robustness with Circuit Breakers (☆259 · updated Sep 24, 2024)
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… (☆88 · updated May 9, 2025)
- The code implementation for the article "Towards Patronizing and Condescending Language in Chinese Videos: A Multimodal Dataset and Fram… (☆16 · updated Apr 3, 2025)
- ☆14 · updated Oct 6, 2024
- Code for NeurIPS 2024 Paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" (☆22 · updated May 6, 2025)
- Codebase used to generate the results for NeurIPS23 "Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directi… (☆13 · updated Dec 8, 2023)
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion (☆59 · updated Oct 1, 2025)
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal (☆879 · updated Aug 16, 2024)
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models (☆51 · updated Jan 11, 2025)
- Case study on evaluating statistical tools that predict recidivism (☆15 · updated Aug 1, 2024)
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" (☆88 · updated Feb 26, 2025)
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] (☆546 · updated Apr 4, 2025)
- ☆48 · updated Jul 14, 2024
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks (☆18 · updated Apr 24, 2024)
- Official Code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024) (☆23 · updated Oct 23, 2024)
- An official PyTorch implementation of the paper "Partial Network Cloning", CVPR 2023 (☆13 · updated Mar 21, 2023)
- ☆20 · updated Oct 28, 2025
- Official Code for EMNLP 2023 paper: "Unveiling the Implicit Toxicity in Large Language Models" (☆15 · updated Nov 30, 2023)
- Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models (☆14 · updated Mar 18, 2024)