smooth-llm ☆131 · Nov 13, 2023 · Updated 2 years ago
Alternatives and similar repositories for smooth-llm
Users interested in smooth-llm are comparing it to the libraries listed below.
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI. (☆57 · Nov 13, 2023 · Updated 2 years ago)
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" (☆33 · Oct 26, 2023 · Updated 2 years ago)
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning (☆11 · Oct 29, 2024 · Updated last year)
- ☆732 · Jul 2, 2025 · Updated 10 months ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… (☆350 · Feb 23, 2024 · Updated 2 years ago)
- Official Repository for ACL 2024 Paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" (☆152 · Jul 19, 2024 · Updated last year)
- Code for NeurIPS 2024 Paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" (☆22 · May 6, 2025 · Updated last year)
- ☆200 · Nov 26, 2023 · Updated 2 years ago
- ☆53 · Aug 10, 2024 · Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] (☆589 · Apr 4, 2025 · Updated last year)
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast (☆119 · Mar 26, 2024 · Updated 2 years ago)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆97 · May 23, 2024 · Updated last year)
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" (☆108 · May 20, 2025 · Updated 11 months ago)
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" (☆62 · Aug 8, 2024 · Updated last year)
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization (☆29 · Jul 9, 2024 · Updated last year)
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers (☆66 · Aug 25, 2024 · Updated last year)
- A lightweight library for large language model (LLM) jailbreaking defense. (☆60 · Sep 11, 2025 · Updated 7 months ago)
- ☆60 · Jun 5, 2024 · Updated last year
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. (☆18 · Jan 14, 2025 · Updated last year)
- NeurIPS'24 - LLM Safety Landscape (☆40 · Oct 21, 2025 · Updated 6 months ago)
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) (☆81 · Mar 1, 2025 · Updated last year)
- TAP: An automated jailbreaking method for black-box LLMs (☆231 · Dec 10, 2024 · Updated last year)
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" (☆176 · Feb 20, 2024 · Updated 2 years ago)
- ☆65 · May 21, 2025 · Updated 11 months ago
- AIR-Bench 2024 is a safety benchmark that aligns with emerging government regulations and company policies (☆30 · Aug 14, 2024 · Updated last year)
- Code for ICLR 2025 "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" (☆36 · Jun 1, 2025 · Updated 11 months ago)
- Improving Alignment and Robustness with Circuit Breakers (☆261 · Sep 24, 2024 · Updated last year)
- ☆44 · Oct 1, 2024 · Updated last year
- Code for Voice Jailbreak Attacks Against GPT-4o. (☆38 · May 31, 2024 · Updated last year)
- Official Repo of Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents (☆74 · Oct 28, 2025 · Updated 6 months ago)
- This is the official GitHub repo for our paper: "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Lang… (☆22 · Jul 3, 2024 · Updated last year)
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks (☆32 · Jul 9, 2024 · Updated last year)
- ☆76 · Mar 30, 2025 · Updated last year
- ☆48 · May 9, 2024 · Updated 2 years ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! (☆354 · Oct 17, 2025 · Updated 6 months ago)
- An easy-to-use Python framework to generate adversarial jailbreak prompts. (☆848 · Mar 30, 2026 · Updated last month)
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… (☆440 · Jan 22, 2025 · Updated last year)
- Repository for reproducing `Model-Based Robust Deep Learning` (☆17 · Jan 22, 2021 · Updated 5 years ago)
- Accepted by IJCAI-24 Survey Track (☆231 · Aug 25, 2024 · Updated last year)