YiyiyiZhao / siren
Welcome to the official repository for Siren, a project aimed at understanding and mitigating harmful behaviors in large language models (LLMs). This repository contains the resources for reproducing the experiments described in our work.
☆15 · Sep 12, 2025 · Updated 5 months ago
Alternatives and similar repositories for siren
Users interested in siren are comparing it to the repositories listed below.
- Red Queen Dataset and data generation template ☆25 · Dec 26, 2025 · Updated last month
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆19 · Oct 22, 2024 · Updated last year
- ☆55 · May 21, 2025 · Updated 8 months ago
- ☆17 · Jul 26, 2025 · Updated 6 months ago
- ☆121 · Feb 3, 2025 · Updated last year
- ☆26 · Mar 17, 2025 · Updated 11 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: 124.220.228.133:11107 ☆20 · Aug 10, 2024 · Updated last year
- ☆24 · Jun 17, 2025 · Updated 7 months ago
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794…) ☆23 · Jul 26, 2024 · Updated last year
- Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models" ☆31 · Dec 30, 2024 · Updated last year
- [NeurIPS 2025] StegoZip: Enhancing Linguistic Steganography Payload in Practice with Large Language Models ☆24 · Dec 4, 2025 · Updated 2 months ago
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLM ☆84 · Nov 3, 2024 · Updated last year
- ☆33 · Jun 24, 2024 · Updated last year
- Code for "Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack" (TPAMI 2025) ☆43 · Aug 28, 2025 · Updated 5 months ago
- ☆14 · Aug 7, 2025 · Updated 6 months ago
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆163 · May 2, 2025 · Updated 9 months ago
- Code for the Findings-EMNLP 2023 paper "Multi-step Jailbreaking Privacy Attacks on ChatGPT" ☆35 · Oct 15, 2023 · Updated 2 years ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆377 · Jan 23, 2025 · Updated last year
- Program uses cv2 to display many streams from cameras, web pages, and local files ☆14 · Jan 31, 2021 · Updated 5 years ago
- ☆12 · Oct 29, 2023 · Updated 2 years ago
- The Pair App is employed by the Agency of Learning for team management and communication ☆10 · Apr 13, 2024 · Updated last year
- [EMNLP 2024 Findings] Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information ☆13 · Oct 1, 2024 · Updated last year
- Q&A dataset for many-shot jailbreaking ☆14 · Jul 19, 2024 · Updated last year
- The repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆13 · Dec 16, 2024 · Updated last year
- ☆11 · Nov 12, 2024 · Updated last year
- The repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 ☆13 · Apr 23, 2025 · Updated 9 months ago
- 🌟 A step-by-step guide to inserting code links into your paper ☆24 · Aug 2, 2025 · Updated 6 months ago
- Official Implementation of implicit reference attack ☆11 · Oct 16, 2024 · Updated last year
- ☆16 · Sep 1, 2025 · Updated 5 months ago
- Adversarial Attack for Pre-trained Code Models ☆10 · Jul 19, 2022 · Updated 3 years ago
- Code repository for the paper "The Inherent Limits of Pretrained LLMs: The Unexpected Convergence of Instruction Tuning and In-Context Le…" ☆13 · Jan 16, 2025 · Updated last year
- ☆14 · Jun 4, 2025 · Updated 8 months ago
- ☆10 · Aug 17, 2018 · Updated 7 years ago
- Auto1111 port of NVlab's adversarial purification method that uses the forward and reverse processes of diffusion models to remove advers… ☆13 · Aug 8, 2023 · Updated 2 years ago
- ☆10 · Apr 29, 2020 · Updated 5 years ago
- ☆18 · Oct 20, 2024 · Updated last year
- AIBOM Workshop, RSA 2024 ☆15 · May 20, 2024 · Updated last year
- [NDSS'25] The official implementation of safety misalignment ☆17 · Jan 8, 2025 · Updated last year
- ☆12 · Sep 10, 2024 · Updated last year