TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models
☆19 · Updated Aug 17, 2025 (6 months ago)
Alternatives and similar repositories for AdversarialTriggers
Users who are interested in AdversarialTriggers are comparing it to the libraries listed below.
- ☆23 · Updated Jan 17, 2025 (last year)
- ☆37 · Updated Dec 19, 2024 (last year)
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Updated Jan 14, 2025 (last year)
- SafeArena is a benchmark for assessing the harmful capabilities of web agents. ☆21 · Updated Apr 23, 2025 (10 months ago)
- Official implementation of TBA for async LLM post-training. ☆29 · Updated Nov 5, 2025 (4 months ago)
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025). ☆142 · Updated Apr 7, 2025 (11 months ago)
- ☆23 · Updated Jun 13, 2024 (last year)
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794…). ☆23 · Updated Jul 26, 2024 (last year)
- ☆30 · Updated Jun 19, 2023 (2 years ago)
- ☆23 · Updated Apr 5, 2023 (2 years ago)
- [ICLR 2025] Official repository for "Tamper-Resistant Safeguards for Open-Weight LLMs". ☆67 · Updated Jun 9, 2025 (8 months ago)
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback". ☆66 · Updated Apr 24, 2024 (last year)
- General research for Dreadnode. ☆27 · Updated Jun 17, 2024 (last year)
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers". ☆127 · Updated Mar 22, 2024 (last year)
- Code repo for the model organisms and convergent directions of EM papers. ☆53 · Updated Sep 22, 2025 (5 months ago)
- Official implementation of our preprint "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆69 · Updated Oct 23, 2024 (last year)
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives. ☆70 · Updated Feb 22, 2024 (2 years ago)
- Algebraic value editing in pretrained language models. ☆69 · Updated Nov 1, 2023 (2 years ago)
- Code for "Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model", EMNLP Findings 20… ☆28 · Updated Nov 2, 2023 (2 years ago)
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling. ☆34 · Updated Nov 8, 2024 (last year)
- ☆40 · Updated May 17, 2025 (9 months ago)
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated Apr 20, 2024 (last year)
- [ICLR 2024] Official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆430 · Updated Jan 22, 2025 (last year)
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆82 · Updated this week
- Repository for the "StrongREJECT for Empty Jailbreaks" paper. ☆152 · Updated Nov 3, 2024 (last year)
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective. ☆41 · Updated Oct 17, 2023 (2 years ago)
- Source code for "Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble", In Pro… ☆10 · Updated Jan 2, 2026 (2 months ago)
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025]. ☆379 · Updated Jan 23, 2025 (last year)
- ☆11 · Updated Mar 31, 2022 (3 years ago)
- GIAnT, the Generic Implementation ANalysis Toolkit. ☆12 · Updated Jul 4, 2018 (7 years ago)
- ☆13 · Updated Jul 20, 2023 (2 years ago)
- Text file containing NSFW words aggregated from various sources. ☆10 · Updated Aug 23, 2020 (5 years ago)
- Implementation of the BEAST adversarial attack for language models (ICML 2024). ☆90 · Updated May 14, 2024 (last year)
- [NeurIPS 2023] Binary Classification with Confidence Difference. ☆10 · Updated May 13, 2024 (last year)
- ☆12 · Updated Oct 1, 2024 (last year)
- A modern look at the relationship between sharpness and generalization [ICML 2023]. ☆43 · Updated Sep 11, 2023 (2 years ago)
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆93 · Updated May 9, 2024 (last year)
- ☆43 · Updated Feb 21, 2022 (4 years ago)
- Spectral Alignment of Graphs. ☆11 · Updated Mar 26, 2017 (8 years ago)