Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives
☆70 · Updated Feb 22, 2024
Alternatives and similar repositories for carving
Users that are interested in carving are comparing it to the libraries listed below.
- Official implementation of GOAT model (ICML 2023) · ☆38 · Updated Jul 3, 2023
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" · ☆31 · Updated Oct 26, 2023
- Independent robustness evaluation of Improving Alignment and Robustness with Short Circuiting · ☆18 · Updated Apr 15, 2025
- Implementation of BEAST adversarial attack for language models (ICML 2024) · ☆89 · Updated May 14, 2024
- Official repo for Detecting, Explaining, and Mitigating Memorization in Diffusion Models (ICLR 2024) · ☆78 · Updated Apr 3, 2024
- The official repository of the paper "On the Exploitability of Instruction Tuning" · ☆69 · Updated Feb 5, 2024
- ☆18 · Updated Oct 12, 2022
- The official code for "An Engorgio Prompt Makes Large Language Model Babble on" · ☆22 · Updated Aug 9, 2025
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs · ☆98 · Updated Nov 17, 2024
- An empirical investigation of deep learning theory · ☆16 · Updated Oct 3, 2019
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models · ☆19 · Updated Aug 17, 2025
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) · ☆200 · Updated May 28, 2024
- An unofficial implementation of the AutoDAN attack on LLMs (arXiv:2310.15140) · ☆45 · Updated Feb 8, 2024
- ☆33 · Updated Nov 27, 2023
- Official PyTorch repo of CVPR'23 and NeurIPS'23 papers on understanding replication in diffusion models · ☆113 · Updated Nov 22, 2023
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] · ☆54 · Updated Feb 6, 2023
- TAP: An automated jailbreaking method for black-box LLMs · ☆224 · Updated Dec 10, 2024
- A method for training neural networks that are provably robust to adversarial attacks [IJCAI 2019] · ☆10 · Updated Sep 3, 2019
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] · ☆43 · Updated Apr 28, 2024
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" · ☆17 · Updated Feb 22, 2024
- Official GitHub page for the paper "Evaluating Deep Unlearning in Large Language Model" · ☆14 · Updated Apr 25, 2025
- Tests that check correctness of a single statement · ☆14 · Updated Nov 25, 2024
- Code for the paper "Fishing for Magikarp" · ☆182 · Updated Mar 12, 2026
- [ICLR 2024] The official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" · ☆434 · Updated Jan 22, 2025
- Python package for measuring memorization in LLMs · ☆184 · Updated Jul 16, 2025
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings · ☆19 · Updated Sep 1, 2025
- ☆19 · Updated Mar 19, 2023
- Skill-Inject: Measuring Agent Vulnerability to Skill File Attacks · ☆36 · Updated Feb 24, 2026
- Code to replicate the Representation Noising paper and tools for evaluating defences against harmful fine-tuning · ☆24 · Updated Dec 12, 2024
- Improving Alignment and Robustness with Circuit Breakers · ☆258 · Updated Sep 24, 2024
- Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks · ☆44 · Updated Sep 23, 2021
- PAL: Proxy-Guided Black-Box Attack on Large Language Models · ☆56 · Updated Aug 17, 2024
- Repository for "StrongREJECT for Empty Jailbreaks" paper · ☆154 · Updated Nov 3, 2024
- A Recipe for Building LLM Reasoners to Solve Complex Instructions · ☆31 · Updated Oct 9, 2025
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models · ☆92 · Updated May 2, 2025
- PyTorch Implementation of Zero-Shot Vision Encoder Grafting via LLM Surrogates [ICCV'25] · ☆53 · Updated Jul 10, 2025
- ☆646 · Updated Aug 4, 2023
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks · ☆18 · Updated Apr 24, 2024
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts · ☆573 · Updated Feb 27, 2026