Code for Findings-EMNLP 2023 paper: Multi-step Jailbreaking Privacy Attacks on ChatGPT
★37 · Oct 15, 2023 · Updated 2 years ago
Alternatives and similar repositories for LLM-Multistep-Jailbreak
Users interested in LLM-Multistep-Jailbreak are comparing it to the libraries listed below.
- Code for our NeurIPS 2024 paper Improved Generation of Adversarial Examples Against Safety-aligned LLMs ★12 · Nov 7, 2024 · Updated last year
- [ICLR 2024 Spotlight 🔥] - [Best Paper Award SoCal NLP 2023 🏆] - Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ★81 · Jun 6, 2024 · Updated last year
- Code for ACL 2024 paper: PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models. ★16 · Feb 5, 2025 · Updated last year
- ★21 · Apr 23, 2025 · Updated 11 months ago
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ★104 · Aug 13, 2024 · Updated last year
- ★716 · Jul 2, 2025 · Updated 9 months ago
- Welcome to the official repository for Siren, a project aimed at understanding and mitigating harmful behaviors in large language models… ★15 · Sep 12, 2025 · Updated 7 months ago
- [ICML 2025] Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions ★14 · Mar 7, 2026 · Updated last month
- Code for NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ★22 · May 6, 2025 · Updated 11 months ago
- ★127 · Feb 3, 2025 · Updated last year
- Red Queen Dataset and data generation template ★26 · Dec 26, 2025 · Updated 3 months ago
- CVPR 2023 generalist ★16 · Oct 25, 2023 · Updated 2 years ago
- Code for "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search" (NeurIPS 2024) ★18 · Oct 22, 2024 · Updated last year
- [ICML 2025] An official source code for paper "FlipAttack: Jailbreak LLMs via Flipping". ★171 · May 2, 2025 · Updated 11 months ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ★62 · Aug 8, 2024 · Updated last year
- ★16 · Sep 4, 2024 · Updated last year
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ★25 · Nov 29, 2024 · Updated last year
- Code space of paper "Safety Layers in Aligned Large Language Models: The Key to LLM Security" (ICLR 2025) ★24 · Apr 26, 2025 · Updated 11 months ago
- ★29 · Aug 31, 2025 · Updated 7 months ago
- Accepted by ECCV 2024 ★200 · Oct 15, 2024 · Updated last year
- ★19 · Jun 18, 2025 · Updated 9 months ago
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ★346 · Feb 23, 2024 · Updated 2 years ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ★199 · Jun 26, 2025 · Updated 9 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ★161 · Nov 30, 2024 · Updated last year
- Code for Findings-ACL 2023 paper: Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Rec… ★46 · Jun 3, 2024 · Updated last year
- Extracting Cultural Commonsense Knowledge at Scale (WWW 2023) ★11 · Feb 15, 2024 · Updated 2 years ago
- ★24 · Aug 18, 2023 · Updated 2 years ago
- ★38 · Oct 15, 2024 · Updated last year
- ★39 · May 17, 2025 · Updated 10 months ago
- ★11 · Dec 22, 2025 · Updated 3 months ago
- General research for Dreadnode ★26 · Jun 17, 2024 · Updated last year
- On the Robustness of GUI Grounding Models Against Image Attacks ★12 · Apr 8, 2025 · Updated last year
- ★48 · May 9, 2024 · Updated last year
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ★39 · Jan 17, 2025 · Updated last year
- ★27 · Jun 5, 2024 · Updated last year
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ★66 · Oct 27, 2024 · Updated last year
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings ★19 · Sep 1, 2025 · Updated 7 months ago
- ★198 · Nov 26, 2023 · Updated 2 years ago
- ★28 · Oct 14, 2021 · Updated 4 years ago