☆48 · May 9, 2024 · Updated last year
Alternatives and similar repositories for SAP
Users interested in SAP are comparing it to the libraries listed below.
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts · ☆576 · Feb 27, 2026 · Updated last month
- Red Queen Dataset and data generation template · ☆26 · Dec 26, 2025 · Updated 3 months ago
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" · ☆152 · Jul 19, 2024 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" · ☆109 · Mar 8, 2024 · Updated 2 years ago
- Code for the paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI · ☆57 · Nov 13, 2023 · Updated 2 years ago
- [ACL 2024] Official repo of the paper "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs" · ☆97 · Aug 15, 2025 · Updated 7 months ago
- ☆198 · Nov 26, 2023 · Updated 2 years ago
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" · ☆66 · Aug 25, 2024 · Updated last year
- Fine-tuning base models to build robust task-specific models · ☆35 · Apr 11, 2024 · Updated 2 years ago
- ☆27 · Jun 5, 2024 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs) · ☆178 · Oct 27, 2023 · Updated 2 years ago
- [ICLR 2024] Official implementation of the paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" · ☆438 · Jan 22, 2025 · Updated last year
- Official implementation of "Simulating Environments with Reasoning Models for Agent Training" · ☆62 · Feb 18, 2026 · Updated last month
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) · ☆65 · Jan 11, 2025 · Updated last year
- ☆716 · Jul 2, 2025 · Updated 9 months ago
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition · ☆90 · May 19, 2024 · Updated last year
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" · ☆171 · May 2, 2025 · Updated 11 months ago
- LLM Self Defense: By Self Examination, LLMs know they are being tricked · ☆51 · May 21, 2024 · Updated last year
- [AAAI'25 (Oral)] Jailbreaking Large Vision-Language Models via Typographic Visual Prompts · ☆199 · Jun 26, 2025 · Updated 9 months ago
- ☆39 · May 21, 2024 · Updated last year
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) · ☆181 · May 6, 2024 · Updated last year
- Tasks for describing differences between text distributions · ☆17 · Aug 9, 2024 · Updated last year
- ☆37 · Sep 30, 2024 · Updated last year
- Code and data accompanying the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" · ☆36 · Updated this week
- PAL: Proxy-Guided Black-Box Attack on Large Language Models · ☆56 · Aug 17, 2024 · Updated last year
- Code for the Findings of EMNLP 2023 paper "Multi-step Jailbreaking Privacy Attacks on ChatGPT" · ☆37 · Oct 15, 2023 · Updated 2 years ago
- ☆16 · Jul 20, 2023 · Updated 2 years ago
- Adversarially robust transfer learning with LwF loss applied to the deep feature representation (penultimate) layer · ☆19 · Feb 9, 2020 · Updated 6 years ago
- ☆129 · Nov 13, 2023 · Updated 2 years ago
- An easy-to-use Python framework to generate adversarial jailbreak prompts · ☆832 · Mar 30, 2026 · Updated last week
- ☆25 · Jan 17, 2025 · Updated last year
- Official implementation of the preprint "Automatic and Universal Prompt Injection Attacks against Large Language Models" · ☆69 · Oct 23, 2024 · Updated last year
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… · ☆346 · Feb 23, 2024 · Updated 2 years ago
- ☆16 · May 30, 2024 · Updated last year
- ☆25 · Jun 17, 2025 · Updated 9 months ago
- Official dataset for the paper "Goal-Oriented Prompt Attack and Safety Evaluation for LLMs" · ☆21 · Feb 5, 2024 · Updated 2 years ago
- ☆50 · Aug 3, 2024 · Updated last year
- [IEEE TIP] Official implementation of "BadCM: Invisible Backdoor Attack against Cross-Modal Learning" · ☆14 · Aug 30, 2024 · Updated last year
- On the Robustness of GUI Grounding Models Against Image Attacks · ☆12 · Apr 8, 2025 · Updated last year