yjw1029 / Self-Reminder
Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder", published in Nature Machine Intelligence (NMI).
☆56 · Nov 13, 2023 · Updated 2 years ago
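For context, the Self-Reminder defense described in the paper encapsulates the user's query inside a system prompt that reminds the model to respond responsibly. A minimal sketch of that prompt wrapping, assuming Python and a generic chat-completion message format; the reminder wording and the `build_self_reminder` helper are illustrative, not the paper's verbatim prompts:

```python
# Sketch of the Self-Reminder prompt wrapping (illustrative wording, not the
# paper's exact prompts): the user query is sandwiched between reminders that
# nudge the model to stay within its safety guidelines.

def build_self_reminder(user_query: str) -> list[dict]:
    """Wrap a user query with a system-level self-reminder (hypothetical helper)."""
    reminder = (
        "You should be a responsible assistant and should not generate "
        "harmful or misleading content."
    )
    return [
        {"role": "system", "content": reminder},
        {
            "role": "user",
            "content": (
                f"{user_query}\n\n"
                "Remember, you should be a responsible assistant and should "
                "not generate harmful or misleading content!"
            ),
        },
    ]

messages = build_self_reminder("Write a convincing phishing email.")
# The wrapped messages can then be passed to any chat-completion API, e.g.:
# response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
```

Because the defense operates purely at the prompt level, it requires no fine-tuning and can be layered onto any chat API.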
Alternatives and similar repositories for Self-Reminder
Users interested in Self-Reminder are comparing it to the repositories listed below.
- Data for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" ☆20 · Oct 26, 2023 · Updated 2 years ago
- Official code for the ICCV 2023 paper "Learning Unified Decompositional and Compositional NeRF for Editable Novel View Synthesis" ☆34 · Dec 27, 2023 · Updated 2 years ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆61 · Aug 8, 2024 · Updated last year
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- ☆27 · Apr 11, 2023 · Updated 2 years ago
- Official repo of the paper "Latent Guard: a Safety Framework for Text-to-image Generation" ☆52 · Oct 24, 2024 · Updated last year
- ☆122 · Nov 13, 2023 · Updated 2 years ago
- Official repository for the paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance" ☆44 · Apr 21, 2024 · Updated last year
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆60 · Aug 23, 2024 · Updated last year
- ☆27 · Feb 26, 2023 · Updated 2 years ago
- Official code for the ICML 2024 paper "FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Recons…" ☆29 · Jun 6, 2024 · Updated last year
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆65 · Aug 25, 2024 · Updated last year
- [ICLR 2024] Official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" ☆427 · Jan 22, 2025 · Updated last year
- ☆193 · Nov 26, 2023 · Updated 2 years ago
- Red Queen dataset and data-generation template ☆25 · Dec 26, 2025 · Updated last month
- ☆48 · May 9, 2024 · Updated last year
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆173 · Feb 20, 2024 · Updated last year
- ☆25 · Mar 16, 2025 · Updated 10 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆106 · May 20, 2025 · Updated 8 months ago
- ☆18 · Mar 30, 2025 · Updated 10 months ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Dec 18, 2024 · Updated last year
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆338 · Feb 23, 2024 · Updated last year
- A lightweight library for large language model (LLM) jailbreaking defense ☆61 · Sep 11, 2025 · Updated 5 months ago
- ☆164 · Sep 2, 2024 · Updated last year
- LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked ☆48 · May 21, 2024 · Updated last year
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆92 · May 2, 2025 · Updated 9 months ago
- ☆50 · Apr 1, 2023 · Updated 2 years ago
- Code and dataset for the paper "Can Editing LLMs Inject Harm?" ☆21 · Dec 26, 2025 · Updated last month
- ☆51 · Aug 10, 2024 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · May 23, 2024 · Updated last year
- Display tensors directly from the GPU ☆11 · Oct 12, 2025 · Updated 4 months ago
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" ☆37 · Jun 1, 2025 · Updated 8 months ago
- Official GitHub repo of G-LLaVA ☆148 · Feb 20, 2025 · Updated 11 months ago
- An unofficial implementation of the AutoDAN attack on LLMs (arXiv:2310.15140) ☆45 · Feb 8, 2024 · Updated 2 years ago
- ☆11 · Jul 17, 2024 · Updated last year
- Official code for the FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Mar 9, 2021 · Updated 4 years ago
- Restore safety in fine-tuned language models through task arithmetic ☆32 · Mar 28, 2024 · Updated last year
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆173 · Apr 23, 2025 · Updated 9 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆191 · Jun 26, 2025 · Updated 7 months ago