arobey1 / smooth-llm
☆122 · Nov 13, 2023 · Updated 2 years ago
Alternatives and similar repositories for smooth-llm
Users interested in smooth-llm are comparing it to the libraries listed below
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Oct 29, 2024 · Updated last year
- ☆51 · Aug 10, 2024 · Updated last year
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" ☆31 · Oct 26, 2023 · Updated 2 years ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆338 · Feb 23, 2024 · Updated last year
- Code for NeurIPS 2024 Paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · May 6, 2025 · Updated 9 months ago
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆151 · Jul 19, 2024 · Updated last year
- ☆193 · Nov 26, 2023 · Updated 2 years ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆118 · Mar 26, 2024 · Updated last year
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆61 · Aug 8, 2024 · Updated last year
- ☆696 · Jul 2, 2025 · Updated 7 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · May 23, 2024 · Updated last year
- A lightweight library for large language model (LLM) jailbreaking defense. ☆61 · Sep 11, 2025 · Updated 5 months ago
- Official Repo of Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents ☆58 · Oct 28, 2025 · Updated 3 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆527 · Apr 4, 2025 · Updated 10 months ago
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers ☆65 · Aug 25, 2024 · Updated last year
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆106 · May 20, 2025 · Updated 8 months ago
- ☆57 · Jun 5, 2024 · Updated last year
- Code for ICLR 2025 Failures to Find Transferable Image Jailbreaks Between Vision-Language Models ☆37 · Jun 1, 2025 · Updated 8 months ago
- ☆72 · Mar 30, 2025 · Updated 10 months ago
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆75 · Mar 1, 2025 · Updated 11 months ago
- Code for Voice Jailbreak Attacks Against GPT-4o. ☆36 · May 31, 2024 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆97 · Mar 7, 2024 · Updated last year
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Jan 14, 2025 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Sep 24, 2024 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs ☆219 · Dec 10, 2024 · Updated last year
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆32 · Jul 9, 2024 · Updated last year
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆172 · Feb 20, 2024 · Updated last year
- ☆25 · Mar 16, 2025 · Updated 10 months ago
- This is the official GitHub repo for our paper: "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Lang… ☆21 · Jul 3, 2024 · Updated last year
- AIR-Bench 2024 is a safety benchmark that aligns with emerging government regulations and company policies ☆28 · Aug 14, 2024 · Updated last year
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆427 · Jan 22, 2025 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆377 · Jan 23, 2025 · Updated last year
- Code for paper "Defending aginast LLM Jailbreaking via Backtranslation"☆34Aug 16, 2024Updated last year
- ☆35 · Sep 13, 2023 · Updated 2 years ago
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆81 · Feb 6, 2026 · Updated last week
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆124 · Mar 22, 2024 · Updated last year
- ☆55 · May 21, 2025 · Updated 8 months ago
- ☆37 · Dec 19, 2024 · Updated last year