Implementation of the BEAST adversarial attack for language models (ICML 2024)
☆89 · May 14, 2024 · Updated last year
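For orientation, BEAST is a gradient-free, beam-search-based attack: candidate suffix tokens are sampled from the target model's own next-token distribution, and beams are ranked by an adversarial objective such as the loss of a desired target response. The sketch below illustrates that idea only; the model choice, prompt, target string, and hyperparameters (`k1`, `k2`, `suffix_len`) are illustrative assumptions, not this repository's code.

```python
# Minimal sketch of a beam-search adversarial suffix attack in the spirit of
# BEAST. All names and hyperparameters here are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()

prompt = "Write a story about"                 # attacker's base prompt (placeholder)
target = "Sure, here is the story"             # desired response prefix (placeholder)
k1, k2, suffix_len = 5, 5, 10                  # beam width, samples per beam, suffix length

@torch.no_grad()
def target_loss(ids: torch.Tensor) -> float:
    """Cross-entropy of the target string given prompt + suffix (lower is better)."""
    tgt = tok(target, return_tensors="pt").input_ids.to(device)
    full = torch.cat([ids, tgt], dim=1)
    # Logits at positions ids_len-1 .. end-1 predict the target tokens.
    logits = model(full).logits[:, ids.shape[1] - 1:-1, :]
    return torch.nn.functional.cross_entropy(
        logits.squeeze(0), tgt.squeeze(0)).item()

@torch.no_grad()
def beam_search_attack() -> str:
    base = tok(prompt, return_tensors="pt").input_ids.to(device)
    beams = [(base, target_loss(base))]
    for _ in range(suffix_len):
        candidates = []
        for ids, _ in beams:
            # Sample k2 candidate next tokens from the model's own
            # distribution rather than searching the whole vocabulary.
            probs = torch.softmax(model(ids).logits[0, -1], dim=-1)
            for t in torch.multinomial(probs, k2, replacement=False):
                new = torch.cat([ids, t.view(1, 1)], dim=1)
                candidates.append((new, target_loss(new)))
        beams = sorted(candidates, key=lambda c: c[1])[:k1]  # keep k1 best beams
    return tok.decode(beams[0][0][0], skip_special_tokens=True)

print(beam_search_attack())
```

Because candidates come from the model's own sampling distribution, the resulting suffixes tend to stay low-perplexity, which is the property the paper exploits for speed and stealth.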
Alternatives and similar repositories for BEAST
Users interested in BEAST are comparing it to the repositories listed below.
- ☆25 · Jan 17, 2025 · Updated last year
- [ICLR24] Official Repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆50 · Jul 24, 2024 · Updated last year
- Official implementation of AdvPrompter https://arxiv.org/abs/2404.16873 ☆180 · May 6, 2024 · Updated last year
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Dec 18, 2024 · Updated last year
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆196 · Jun 26, 2025 · Updated 8 months ago
- ☆25 · Jun 17, 2025 · Updated 9 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆70 · Feb 22, 2024 · Updated 2 years ago
- An unofficial implementation of the AutoDAN attack on LLMs (arXiv:2310.15140) ☆45 · Feb 8, 2024 · Updated 2 years ago
- A Large Action Model designed to operate on MacOS or Windows which interacts with common C2 interfaces such as Cobalt Strike, Havoc, or B… ☆26 · Feb 29, 2024 · Updated 2 years ago
- ☆16 · May 30, 2024 · Updated last year
- ☆33 · Jun 24, 2024 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆70 · Oct 23, 2024 · Updated last year
- Tree of Attacks (TAP) Jailbreaking Implementation ☆118 · Feb 7, 2024 · Updated 2 years ago
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆321 · May 13, 2025 · Updated 10 months ago
- TAP: An automated jailbreaking method for black-box LLMs ☆224 · Dec 10, 2024 · Updated last year
- Example code for the NNGeometry PyTorch library ☆10 · Aug 20, 2025 · Updated 7 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆56 · Aug 17, 2024 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆380 · Jan 23, 2025 · Updated last year
- ☆27 · Jun 5, 2024 · Updated last year
- ☆60 · Mar 9, 2023 · Updated 3 years ago
- [Findings of ACL 2023] Bridge the Gap Between CV and NLP! An Optimization-based Textual Adversarial Attack Framework. ☆14 · Aug 27, 2023 · Updated 2 years ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Jan 11, 2025 · Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) ☆160 · Nov 30, 2024 · Updated last year
- Fluent student-teacher redteaming ☆23 · Jul 25, 2024 · Updated last year
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models ☆19 · Aug 17, 2025 · Updated 7 months ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,568 · Aug 2, 2024 · Updated last year
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling ☆35 · Nov 8, 2024 · Updated last year
- ☆19 · Jun 5, 2023 · Updated 2 years ago
- ☆60 · Jun 5, 2024 · Updated last year
- ☆28 · Mar 20, 2024 · Updated 2 years ago
- NVIDIA’s repository for enabling trustworthy AI. ☆28 · Mar 3, 2026 · Updated 2 weeks ago
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆19 · Oct 22, 2024 · Updated last year
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆92 · May 2, 2025 · Updated 10 months ago
- ☆28 · Sep 21, 2024 · Updated last year
- Official implementation of the implicit reference attack ☆11 · Oct 16, 2024 · Updated last year
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆118 · Mar 26, 2024 · Updated last year
- (AAAI 2024) Transferable Adversarial Attacks for Object Detection using Object-Aware Significant Feature Distortion ☆16 · Dec 13, 2023 · Updated 2 years ago
- Official codebase for Image Hijacks: Adversarial Images can Control Generative Models at Runtime ☆53 · Sep 19, 2023 · Updated 2 years ago
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping". ☆168 · May 2, 2025 · Updated 10 months ago