☆121 · Updated Apr 27, 2025
Alternatives and similar repositories for jailbreak-reasoning-openai-o1o3-deepseek-r1
Users interested in jailbreak-reasoning-openai-o1o3-deepseek-r1 are comparing it to the libraries listed below.
- Official implementation of the implicit reference attack · ☆11 · Updated Oct 16, 2024
- ☆12 · Updated Jul 16, 2025
- ☆13 · Updated Jan 14, 2026
- [ICLR24] Official repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models · ☆53 · Updated Jul 24, 2024
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods · ☆15 · Updated Apr 8, 2025
- ☆40 · Updated May 17, 2025
- [ACL 2025] Beyond Prompt Engineering: Robust Behavior Control in LLMs via Steering Target Atoms · ☆40 · Updated Jun 4, 2025
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts · ☆202 · Updated Jun 26, 2025
- ☆129 · Updated Feb 3, 2025
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking · ☆12 · Updated Aug 22, 2025
- ☆35 · Updated Jun 28, 2025
- ☆49 · Updated Feb 25, 2026
- A new algorithm that formulates jailbreaking as a reasoning problem. · ☆26 · Updated Jul 2, 2025
- ☆136 · Updated Jul 7, 2025
- Benchmarking Physical Risk Awareness of Foundation Model-based Embodied AI Agents · ☆23 · Updated Nov 28, 2024
- The code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" · ☆17 · Updated Feb 26, 2024
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" · ☆66 · Updated Aug 25, 2024
- Improving Alignment and Robustness with Circuit Breakers · ☆261 · Updated Sep 24, 2024
- The code implementation for the article "Towards Patronizing and Condescending Language in Chinese Videos: A Multimodal Dataset and Fram…" · ☆16 · Updated Apr 3, 2025
- ☆14 · Updated Oct 6, 2024
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… · ☆89 · Updated May 9, 2025
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" · ☆172 · Updated May 2, 2025
- Codebase used to generate the results for the NeurIPS23 paper "Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directi…" · ☆13 · Updated Dec 8, 2023
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion · ☆59 · Updated Oct 1, 2025
- PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing · ☆21 · Updated Mar 18, 2025
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … · ☆82 · Updated this week
- Case study on evaluating statistical tools that predict recidivism. · ☆15 · Updated Aug 1, 2024
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal · ☆936 · Updated Aug 16, 2024
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" · ☆89 · Updated Feb 26, 2025
- ☆48 · Updated Jul 14, 2024
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks · ☆18 · Updated Apr 24, 2024
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models · ☆51 · Updated Jan 11, 2025
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] · ☆582 · Updated Apr 4, 2025
- Official code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024) · ☆23 · Updated Oct 23, 2024
- Official PyTorch implementation of the paper "Partial Network Cloning" (CVPR 2023) · ☆13 · Updated Mar 21, 2023
- ☆20 · Updated Oct 28, 2025
- Official code for the EMNLP 2023 paper "Unveiling the Implicit Toxicity in Large Language Models" · ☆15 · Updated Nov 30, 2023
- Self-teaching notes on gradient leakage attacks against GPT-2 models · ☆15 · Updated Mar 18, 2024
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models · ☆317 · Updated Jan 11, 2026