AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs
☆83 · Updated Nov 3, 2024
Alternatives and similar repositories for AmpleGCG
Users interested in AmpleGCG are comparing it to the repositories listed below.
- ☆23 · Updated Jan 17, 2025
- ☆24 · Updated Jun 17, 2025
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆319 · Updated May 13, 2025
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling ☆34 · Updated Nov 8, 2024
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) ☆142 · Updated Apr 7, 2025
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆39 · Updated Jan 17, 2025
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆35 · Updated Oct 23, 2024
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" ☆430 · Updated Jan 22, 2025
- The repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆13 · Updated Dec 16, 2024
- Code and data accompanying the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆36 · Updated Dec 18, 2024
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆57 · Updated Aug 17, 2024
- Welcome to the official repository for Siren, a project aimed at understanding and mitigating harmful behaviors in large language models … ☆15 · Updated Sep 12, 2025
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆151 · Updated Jul 19, 2024
- Fluent student-teacher redteaming ☆23 · Updated Jul 25, 2024
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆377 · Updated Jan 23, 2025
- TAP: An automated jailbreaking method for black-box LLMs ☆221 · Updated Dec 10, 2024
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆864 · Updated Aug 16, 2024
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking ☆12 · Updated Aug 22, 2025
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆69 · Updated Oct 23, 2024
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆66 · Updated Aug 25, 2024
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆128 · Updated Feb 24, 2025
- Red Queen Dataset and data generation template ☆26 · Updated Dec 26, 2025
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆179 · Updated May 6, 2024
- An easy-to-use Python framework to generate adversarial jailbreak prompts ☆815 · Updated Mar 27, 2025
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆25 · Updated Nov 29, 2024
- Code for the safety tests in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates" ☆22 · Updated Sep 21, 2025
- ☆698 · Updated Jul 2, 2025
- ☆20 · Updated Feb 11, 2024
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Updated Sep 24, 2024
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆66 · Updated Apr 24, 2024
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆341 · Updated Feb 23, 2024
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆535 · Updated Apr 4, 2025
- ☆72 · Updated Mar 30, 2025
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆90 · Updated May 2, 2025
- ☆23 · Updated Oct 25, 2024
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794) ☆23 · Updated Jul 26, 2024
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆90 · Updated May 14, 2024
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns ☆13 · Updated Mar 1, 2025