jiaxiaojunQAQ / I-GCG
Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025)
☆99 · Updated last month
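For orientation before the list: I-GCG refines the greedy coordinate gradient (GCG) attack, which optimizes an adversarial suffix so that the target model assigns high probability to an affirmative response. Below is a minimal, illustrative sketch of the underlying GCG loop, assuming a Hugging Face causal LM; the model name, prompt, target string, suffix initialization, and hyperparameters are placeholders, not values from the paper, and I-GCG's own refinements (e.g., its multi-coordinate update strategy and improved target templates) are not shown.

```python
# Minimal sketch of the greedy coordinate gradient (GCG) loop that I-GCG
# builds on -- NOT the authors' implementation. All names and sizes below
# are illustrative placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the attack targets aligned chat models
device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the one-hot suffix needs gradients
embed = model.get_input_embeddings()

prompt_ids = tok("<harmful request>", return_tensors="pt").input_ids.to(device)
target_ids = tok(" Sure, here is", return_tensors="pt").input_ids.to(device)
suffix_ids = tok(" ! ! ! ! !", return_tensors="pt").input_ids.to(device)
top_k, n_cand, n_steps = 64, 32, 10  # illustrative sizes

def exact_loss(suffix):
    """Cross-entropy of the target tokens given prompt + suffix."""
    ids = torch.cat([prompt_ids, suffix, target_ids], dim=1)
    logits = model(ids).logits
    start = prompt_ids.shape[1] + suffix.shape[1]
    return F.cross_entropy(logits[0, start - 1 : ids.shape[1] - 1], target_ids[0])

for step in range(n_steps):
    # 1) Gradient of the loss w.r.t. a one-hot relaxation of the suffix tokens.
    one_hot = F.one_hot(suffix_ids[0], embed.num_embeddings).float()
    one_hot.requires_grad_(True)
    emb = torch.cat(
        [embed(prompt_ids), (one_hot @ embed.weight).unsqueeze(0), embed(target_ids)],
        dim=1,
    )
    logits = model(inputs_embeds=emb).logits
    start = prompt_ids.shape[1] + suffix_ids.shape[1]
    loss = F.cross_entropy(logits[0, start - 1 : emb.shape[1] - 1], target_ids[0])
    loss.backward()

    # 2) Top-k promising swaps per suffix position (most negative gradient).
    cand = (-one_hot.grad).topk(top_k, dim=1).indices  # [suffix_len, top_k]

    # 3) Evaluate random single-token swaps exactly; greedily keep the best.
    best_loss, best_suffix = loss.item(), suffix_ids
    with torch.no_grad():
        for _ in range(n_cand):
            pos = torch.randint(suffix_ids.shape[1], (1,)).item()
            trial = suffix_ids.clone()
            trial[0, pos] = cand[pos, torch.randint(top_k, (1,)).item()]
            l = exact_loss(trial).item()
            if l < best_loss:
                best_loss, best_suffix = l, trial
    suffix_ids = best_suffix
    print(f"step {step}: target loss {best_loss:.3f}")
```

The exact-loss re-ranking of sampled candidates in step 3 is what makes the coordinate updates "greedy"; I-GCG's reported gains come from choices layered on top of this loop rather than from changing its structure.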
Alternatives and similar repositories for I-GCG
Users interested in I-GCG are comparing it to the repositories listed below.
- Improving fast adversarial training with prior-guided knowledge (TPAMI 2024) ☆41 · Updated last year
- [CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models ☆129 · Updated last month
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆114 · Updated 2 weeks ago
- [NDSS'24] Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time ☆55 · Updated 7 months ago
- Code for Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack ☆33 · Updated 6 months ago
- Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging (TIFS 2024) ☆34 · Updated 11 months ago
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… ☆94 · Updated 2 weeks ago
- AISafetyLab: A comprehensive framework covering safety attacks, defenses, evaluation, and a paper list ☆162 · Updated last week
- Official implementation of the paper "Invisible Backdoor Attack against Self-supervised Learning" ☆11 · Updated 3 weeks ago
- [CVPR 2024] MMA-Diffusion: MultiModal Attack on Diffusion Models ☆159 · Updated last year
- Code repository for our submission: Understanding the Dark Side of LLMs' Intrinsic Self-Correction ☆56 · Updated 4 months ago
- Code for Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks (TIFS 2024) ☆12 · Updated last year
- YiJian-Community: a full-process automated large-model safety evaluation tool designed for academic research ☆110 · Updated 7 months ago
- Awesome jailbreak and red-teaming arXiv papers (automatically updated every 12 hours) ☆31 · Updated this week
- 🔥🔥🔥 Breaking long thought processes of o1-like LLMs, such as DeepSeek-R1 and QwQ ☆29 · Updated 2 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆19 · Updated 6 months ago
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794) ☆19 · Updated 9 months ago
- Code for the Findings of EMNLP 2023 paper: Multi-step Jailbreaking Privacy Attacks on ChatGPT ☆33 · Updated last year
- [ICML'22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, … ☆65 · Updated 2 years ago
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆12 · Updated last week
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆36 · Updated 2 months ago
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ☆52 · Updated 11 months ago
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbre… ☆17 · Updated 2 weeks ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆46 · Updated 4 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆139 · Updated 2 months ago