yueliu1999 / FlipAttack
[ICLR Workshop 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping".
☆103 · Updated 3 weeks ago
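As a rough illustration of the "flipping" named in the title: FlipAttack disguises a harmful request by reversing its text and then asks the model to recover and follow it. The Python sketch below is not the official implementation; the function names and the three flipping variants shown (word order, characters within each word, the whole character sequence) are assumptions based on the paper's description.

```python
# Illustrative sketch only, not the official FlipAttack code.
# Three simple ways a prompt string can be "flipped" to hide its surface form;
# the mode names here are assumptions, not the repository's API.

def flip_word_order(prompt: str) -> str:
    """Reverse the order of words, keeping each word intact."""
    return " ".join(reversed(prompt.split()))

def flip_chars_in_word(prompt: str) -> str:
    """Reverse the characters inside each word, keeping the word order."""
    return " ".join(word[::-1] for word in prompt.split())

def flip_chars_in_sentence(prompt: str) -> str:
    """Reverse the entire character sequence of the prompt."""
    return prompt[::-1]

if __name__ == "__main__":
    text = "flip this example sentence"
    print(flip_word_order(text))         # sentence example this flip
    print(flip_chars_in_word(text))      # pilf siht elpmaxe ecnetnes
    print(flip_chars_in_sentence(text))  # ecnetnes elpmaxe siht pilf
```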
Alternatives and similar repositories for FlipAttack:
Users interested in FlipAttack are comparing it to the repositories listed below
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) ☆91 · Updated 2 months ago
- [CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models ☆129 · Updated last week
- [NDSS'24] Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time ☆56 · Updated 6 months ago
- Code for Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack ☆33 · Updated 4 months ago
- [USENIX Security '24] Dataset associated with real-world malicious LLM applications, including 45 malicious prompts for generating malici… ☆60 · Updated 5 months ago
- Improving fast adversarial training with prior-guided knowledge (TPAMI 2024) ☆41 · Updated 11 months ago
- Agent Security Bench (ASB) ☆69 · Updated this week
- Code for Findings-EMNLP 2023 paper: Multi-step Jailbreaking Privacy Attacks on ChatGPT ☆33 · Updated last year
- [ICML22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, … ☆65 · Updated 2 years ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆132 · Updated 4 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆59 · Updated 2 months ago
- We leverage 14 datasets as OOD test data and conduct evaluations on 8 NLU tasks over 21 widely used models. Our findings confirm that … ☆94 · Updated last year
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆65 · Updated 6 months ago
- ☆193 · Updated 3 weeks ago
- Awesome jailbreak and red-teaming arXiv papers (automatically updated every 12 hours) ☆23 · Updated this week
- This includes the original implementation of CtrlA: Adaptive Retrieval-Augmented Generation via Inherent Control. ☆60 · Updated 5 months ago
- Benchmarking LLMs via Uncertainty Quantification ☆217 · Updated last year
- [NeurIPS 2024] GuardT2I: Defending Text-to-Image Models from Adversarial Prompts ☆20 · Updated 4 months ago
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasonin… ☆164 · Updated 3 months ago
- ☆46 · Updated 5 months ago
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆51 · Updated 7 months ago
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794… ☆19 · Updated 8 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆41 · Updated 2 months ago
- Code for ACL 2024 long paper: Are AI-Generated Text Detectors Robust to Adversarial Perturbations? ☆27 · Updated 8 months ago
- LLM Benchmark for Code ☆31 · Updated 7 months ago
- RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response ☆40 · Updated 3 months ago
- A lightweight library for large language model (LLM) jailbreaking defense ☆48 · Updated 5 months ago
- YiJian-Comunity: a full-process automated large model safety evaluation tool designed for academic research ☆108 · Updated 5 months ago
- [ACL 2024] CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and … ☆97 · Updated 8 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆43 · Updated 5 months ago