Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives
☆70 · Feb 22, 2024 · Updated 2 years ago
Alternatives and similar repositories for carving
Users interested in carving are comparing it to the libraries listed below.
- Official implementation of GOAT model (ICML 2023) ☆38 · Jul 3, 2023 · Updated 2 years ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆34 · Sep 28, 2025 · Updated 6 months ago
- Official code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" ☆33 · Oct 26, 2023 · Updated 2 years ago
- Independent robustness evaluation of Improving Alignment and Robustness with Short Circuiting ☆17 · Apr 15, 2025 · Updated 11 months ago
- The official code for "An Engorgio Prompt Makes Large Language Model Babble on" ☆22 · Aug 9, 2025 · Updated 8 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆98 · Nov 17, 2024 · Updated last year
- An empirical investigation of deep learning theory ☆16 · Oct 3, 2019 · Updated 6 years ago
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models ☆19 · Aug 17, 2025 · Updated 7 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆201 · May 28, 2024 · Updated last year
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆54 · Feb 6, 2023 · Updated 3 years ago
- TAP: An automated jailbreaking method for black-box LLMs ☆225 · Dec 10, 2024 · Updated last year
- A method for training neural networks that are provably robust to adversarial attacks [IJCAI 2019] ☆10 · Sep 3, 2019 · Updated 6 years ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Apr 28, 2024 · Updated last year
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Feb 22, 2024 · Updated 2 years ago
- Official GitHub page for the paper "Evaluating Deep Unlearning in Large Language Model" ☆14 · Apr 25, 2025 · Updated 11 months ago
- Tests that check correctness of a single statement ☆14 · Nov 25, 2024 · Updated last year
- [ICLR 2024] The official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" ☆436 · Jan 22, 2025 · Updated last year
- Python package for measuring memorization in LLMs ☆186 · Jul 16, 2025 · Updated 8 months ago
- Code to replicate the Representation Noising paper and tools for evaluating defences against harmful fine-tuning ☆24 · Dec 12, 2024 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆260 · Sep 24, 2024 · Updated last year
- Repository for the "StrongREJECT for Empty Jailbreaks" paper ☆155 · Nov 3, 2024 · Updated last year
- A Recipe for Building LLM Reasoners to Solve Complex Instructions ☆31 · Oct 9, 2025 · Updated 6 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆90 · May 2, 2025 · Updated 11 months ago
- PyTorch implementation of "Zero-Shot Vision Encoder Grafting via LLM Surrogates" [ICCV'25] ☆53 · Jul 10, 2025 · Updated 9 months ago
- ☆645 · Aug 4, 2023 · Updated 2 years ago
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ☆18 · Apr 24, 2024 · Updated last year
- Official repo for "GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts" ☆576 · Feb 27, 2026 · Updated last month
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Jun 10, 2024 · Updated last year
- Official code for the paper "Provable Compositional Generalization for Object-Centric Learning" (ICLR 2024, oral) ☆16 · Aug 26, 2024 · Updated last year
- ☆13 · Oct 21, 2021 · Updated 4 years ago
- The official repository for the paper "MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance" ☆45 · Apr 21, 2024 · Updated last year
- ☆48 · Feb 25, 2026 · Updated last month
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆95 · Updated this week
- Code to conduct an embedding attack on LLMs ☆31 · Jan 10, 2025 · Updated last year
- [arXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models ☆23 · Oct 22, 2024 · Updated last year
- ☆13 · Apr 22, 2024 · Updated last year
- ☆198 · Nov 26, 2023 · Updated 2 years ago
- NeurIPS 2024 tutorial on LLM Inference ☆49 · Dec 10, 2024 · Updated last year
- ☆12 · Oct 20, 2023 · Updated 2 years ago